CN100559881C - Video quality evaluation method based on an artificial neural network - Google Patents

Video quality evaluation method based on an artificial neural network

Info

Publication number
CN100559881C (application CN200810106132A)
Authority
CN
China
Prior art keywords
image
value
parameter
sigma
artificial neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200810106132
Other languages
Chinese (zh)
Other versions
CN101282481A (en)
Inventor
姜秀华
孟放
许江波
周炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Communication University of China
Original Assignee
Communication University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Communication University of China filed Critical Communication University of China
Priority to CN 200810106132 priority Critical patent/CN100559881C/en
Publication of CN101282481A publication Critical patent/CN101282481A/en
Application granted granted Critical
Publication of CN100559881C publication Critical patent/CN100559881C/en

Abstract

A video quality evaluation method based on an artificial neural network, belonging to the field of digital video processing. The evaluation algorithm computes the degree of image impairment by analyzing the spatial characteristics of the video image (blur, entropy, blocking artifacts, frequency-domain energy distribution, saturation) and its temporal characteristic (frame difference). Taking chrominance saturation as one of the parameters of the no-reference evaluation algorithm effectively improves the evaluation performance. The system is designed around an artificial neural network, so the realization of the algorithm comprises a training process and a test process of the network. For each selected training sample (a video image sequence), the six parameters above are first extracted, and the desired output of the training sample (its subjective assessment result) is obtained through subjective assessment. The feature parameters of the training samples, together with the corresponding subjective scores, are fed to the artificial neural network as training data. Experiments show that the evaluation results obtained by this objective evaluation system are highly consistent with the visual experience of the human eye.

Description

Video quality evaluation method based on an artificial neural network
Technical field
The present invention relates to the field of digital video processing and provides a no-reference digital video quality evaluation method based on an artificial neural network.
Background technology
Video images are the principal carrier of visual information, and video image processing is one of today's most important research areas. Every link in a video application chain (acquisition, compression, transmission, processing, storage, and so on) inevitably affects picture quality. Since images are ultimately watched by human users, the correct evaluation of picture quality is one of the key technologies in image information engineering.
Video quality evaluation methods fall into two broad classes: subjective assessment and objective evaluation. Subjective assessment directly uses human observers to judge the quality of the image under test through visual perception. Its results are reliable and match the actually observed image quality, so it occupies a central position in image quality measurement, and international standards already exist for it (e.g. ITU-R BT.500, ITU-R BT.710). However, subjective assessment demands a strictly controlled test environment, its procedure is complex, and its results are not repeatable. As video applications keep expanding, effective subjective assessment cannot be carried out in some deployment environments. Objective evaluation algorithms, by contrast, build mathematical models from an analysis of video processing algorithms and the human visual system in order to measure picture quality automatically. Such algorithms can be embedded at every stage of the video chain to analyze stream quality quantitatively, and the results can be analyzed further to adjust system parameters and improve the final display quality. Objective image quality evaluation is therefore a current research focus.
Traditional objective quality metrics are represented by MSE (mean squared error) and PSNR (peak signal-to-noise ratio). These metrics are theoretically intuitive and cheap to compute, and in ordinary cases they give acceptable measurements. But such purely signal-based metrics do not consider the perceptual characteristics of the human eye, so in some cases their results are inconsistent with subjective impressions. In recent years, building on a thorough analysis of video processing algorithms and visual perception, research institutions have proposed many improved algorithms. According to how much an algorithm depends on the undistorted reference, the algorithms can be divided into three classes:
1. Full-reference (FR) video quality evaluation
The undistorted original video is fully available to the objective evaluation algorithm, so the measurement results are comparatively accurate. The problems are: (1) an evaluation algorithm that considers the original video must handle a huge data volume and has high computational complexity; (2) in many application scenarios the undistorted image cannot be obtained at the receiving end.
2. Reduced-reference (RR) video quality evaluation
Only feature parameters of the undistorted original video are available to the objective evaluation algorithm. The algorithm first defines and extracts various feature parameters of the original video, then transmits them to the receiving end over a narrow-bandwidth auxiliary channel. The same feature parameters are extracted from the impaired image under test at the receiver, and a statistical analysis of the two parameter sets yields the objective quality result of the impaired image. Compared with FR algorithms, the amount of original-image information required drops sharply and the implementation complexity falls; at the same time, because the original image is still taken into account, the algorithm retains fairly high credibility. The problems are: (1) in theory, some impaired video images can have statistics nearly identical to those of the corresponding undistorted image while differing widely in subjective appearance; (2) image statistics change easily with viewing distance, luminance range and similar factors, possibly yielding entirely different statistics, which this class of algorithm cannot fully account for; (3) in some application environments no auxiliary channel can be added to carry the feature parameters of the undistorted image.
3. No-reference (NR) video quality evaluation
Without any information about the original image, the algorithm extracts feature parameters from the impaired image itself, looking for image defects such as MPEG block edges, point noise, or blur, and derives the quality evaluation result from them. These methods need no data whatsoever from the original video, can measure the quality of the impaired video at the receiving end alone, and are required to agree well with subjective assessment results, which makes this class of algorithm the hardest to research. Because no original-image information is needed, no-reference objective evaluation algorithms can be placed at multiple links of a video application system for on-line testing. It is on this basis that the present system proposes a no-reference objective quality evaluation system for digital video.
Research status of no-reference algorithms at home and abroad
1. Domestic research status
Wang Xindai et al. proposed a no-reference video quality evaluation algorithm in 2004, well suited to quality assessment of wireless and IP video services. The method extracts a watermark embedded in the compressed video in advance and compares it with the original watermark copy stored at the receiving end, thereby evaluating video quality. Because the system must add watermark information to the original signal, it degrades image quality to some extent; moreover, the receiver needs the original watermark information for comparison, so the method has certain application limitations.
Yin Xiaoli et al. proposed a no-reference image quality evaluation method based on a semi-fragile digital watermarking algorithm in 2006. The system's evaluation results are good, but it shares the problems above; in addition, the algorithm is designed for still images and cannot handle video.
Wang Zhengyou et al. proposed a no-reference image quality evaluation method based on masking effects, approached from the angle of noise detection, in 2006. They first improved the Hosaka block-partitioning method, removing its restriction on image size. Block partitioning separates image regions of different frequency content, and the noise of each sub-block is then computed. From the degree of contamination of the image, they proposed NPSNR, a no-reference peak signal-to-noise ratio based on masking effects. Experimental results show that the method is reference-free, computationally cheap, and consistent between subjective and objective results. However, it analyzes image quality from noise alone, and noise is chiefly the dominant impairment of analog television systems, so the method is not suited to digital video.
Yang Fuzheng et al. proposed a no-reference quality evaluation method for block-coded video in 2006. Combining the luminance-masking and contrast-masking characteristics of human vision, they first proposed a blocking-artifact measure consistent with subjective visual perception; then, considering the influence of filtering on blocking artifacts, they gave a quality evaluation method applicable to video reconstructed from block coding with different compression and processing algorithms. Experiments show that this quality measure agrees well with subjective quality evaluation. But the algorithm analyzes only the blocking artifacts in the image, and one parameter is clearly insufficient to reflect overall image quality. The algorithm also rests on the masking properties of the human eye, and formulating masking correctly is extremely difficult.
2. Research status abroad
Pina Marziliano et al. proposed a no-reference evaluation method for video images based on blur analysis in 2002. The method analyzes the spread of image edges and sets perceptual thresholds according to subjective impressions. Its computational complexity is low and its speed approaches real-time playback. Since the evaluation performance depends on the quality of edge detection, the applicability of the algorithm is somewhat limited.
Hanghang Tong et al. proposed a no-reference image quality evaluation method for JPEG2000 in 2004. All edge points in the image are classified as impaired or unimpaired, and PCA is then used to extract local features from given edge pixels to judge whether they are blurred or exhibit ringing. Edge pixels are also used to judge the local degree of impairment, applicable to various local features. The effectiveness of this method likewise depends heavily on edge detection, and no truly mature edge detection algorithm exists yet, so its results are constrained; moreover, it can only analyze still images and cannot handle video.
Remco Muijs et al. proposed a no-reference image quality evaluation method based on signature analysis in 2005. It mainly analyzes blocking artifacts, the key factor affecting the quality of block-compressed images, deciding the degree of blockiness from the detected position and visibility of block edges. Since the diagnosis hinges on block-edge position, any spatial shift of the image displaces the block edges and makes the result inaccurate. And the method analyzes only one kind of impairment, blocking, which is clearly not enough to reflect the overall quality of the image.
Other foreign work includes computing subjective-objective fitting parameters by regression and realizing no-reference objective evaluation by training artificial neural networks, but all of it lacks a comprehensive analysis of video image feature parameters. Current algorithms remain limited: they cannot give the overall quality of an image or video, and they are subject to various restrictions in application.
Summary of the invention
To overcome the shortcomings of existing evaluation methods, the present invention designs a no-reference video quality evaluation method based on an artificial neural network. The method considers multiple feature parameters of the video image together, defines and extracts specified parameters in accordance with the visual characteristics of the human eye as the input set of the objective evaluation system, and defines the corresponding subjective assessment results as the output set of the evaluation system. The network is trained by selecting training samples and obtaining their subjective assessment results. Experiments show that the trained artificial neural network produces objective evaluation results highly consistent with subjective assessment.
The technical ideas of the present invention are characterized as follows:
1. A no-reference video quality evaluation method is proposed. The method computes the degree of image impairment by analyzing the spatial and temporal statistical properties of the video image. These properties comprise spatial features (blur, entropy, blocking artifacts, frequency-domain energy distribution, saturation) and a temporal feature (frame difference).
2. Impairment of the chrominance space is taken into account. Current no-reference algorithm designs are dominated by luminance information, and the chrominance space is almost always ignored. The chrominance impairment analyzed by this system effectively improves the evaluation results.
3. The individual features of the video image are analyzed and combined as the input parameters of the evaluation system based on the artificial neural network.
The system framework of the present invention is shown in Fig. 1. Since the system is designed around an artificial neural network, it comprises two major parts, training and testing. Video sequences that participate in training are called "training samples"; those used to measure system performance are called "test samples". The method comprises the following steps in order:
1) For each training sample, we first extract the feature parameters of the video image (the extraction process is shown in Fig. 2) as the input parameters of the artificial neural network; at the same time, the subjective assessment result of the training sample is obtained through subjective assessment and serves as the desired output of the network;
2) The parameters obtained above are stored in a parameter database, to be used for training the artificial neural network;
3) The training process of the artificial neural network is executed (the training process is shown in Fig. 3): for each input sample, its six corresponding feature parameters are taken as the system input, fed to the six input nodes of the network, and the desired output corresponding to that input sample (its subjective test result) is taken as the desired system output. The connection weights between nodes in the network are adjusted according to the difference between the desired output and the actual output, i.e. the error.
4) When the termination condition is met (this system uses the accumulated error over all training samples as the control condition: if the accumulated error in a training pass falls below a specified threshold, training ends), the no-reference evaluation model based on the artificial neural network is established. Afterwards, the test samples can be used to measure the performance of this model. The threshold is typically $10^{-4}$, but the concrete value depends on the number and characteristics of the samples participating in training; in different applications the training stop condition must be adjusted to the actual situation.
5) For an input test sample, the same feature parameters are extracted (as shown in Fig. 2) and fed into the trained model, which yields the quality evaluation result of the test sample (the test process is shown in Fig. 5).
The modules of Fig. 1 and of the steps above are explained in the order of the figure numbering.
Feature extraction module: as shown in Fig. 2, nine feature parameters are first extracted from the input video image: eight spatial parameters (image activity, average gradient, edge energy statistic, zero-crossing rate statistic, entropy, blocking artifact, frequency-domain energy distribution characteristic, saturation) and one temporal parameter (frame difference). Except for saturation, which is a statistic of the chrominance space, all parameters come from the luminance plane of the video image. In addition, we average the first four spatial features into a single quantity, defined as the blur parameter of the image.
Image activity Activity_image: a quantity mainly reflecting image detail and texture, defined as first-order difference statistics of the luminance in the horizontal and vertical directions. First take the forward difference along each row of the image and compute the sum of its squares as the statistic of that row; then accumulate the statistics of all rows to obtain the horizontal image activity, as in the following formulas:

$$D(i) = \sum_{j=1}^{M-1}\bigl(Y(i,j) - Y(i,j-1)\bigr)^2, \quad 0 \le i \le N-1 \qquad (1)$$

$$H_1 = \sum_{i=0}^{N-1} D(i) \qquad (2)$$

where $Y$ is the luminance plane of the image and $Y(i,j)$ is the luminance value of the pixel at row $i$, column $j$; $M$ and $N$ are the numbers of pixels in the horizontal and vertical directions respectively (in the formulas below, the definitions of $Y(i,j)$, $M$ and $N$ are all the same as here). $D(i)$ is the statistic of row $i$, and $H_1$ is the horizontal image activity. The vertical image activity $V_1$ is obtained by the same method, and the two are added to give the image activity of the whole image: $Activity_{image} = H_1 + V_1$.
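A minimal numpy sketch of equations (1) and (2), assuming the luminance plane is stored as a 2-D array with $N$ rows and $M$ columns; the function name is illustrative:

```python
import numpy as np

def image_activity(y: np.ndarray) -> float:
    """Activity_image = H1 + V1: sum of squared first-order luminance
    differences along rows plus along columns (eqs. 1-2)."""
    y = y.astype(np.float64)
    h1 = np.sum(np.diff(y, axis=1) ** 2)  # horizontal differences over all rows
    v1 = np.sum(np.diff(y, axis=0) ** 2)  # vertical differences over all columns
    return float(h1 + v1)
```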
Average gradient Ave_Gradient: a quantity reflecting image detail and texture, obtained from second-order pixel differences. Concretely: for each pixel, take the second difference in the horizontal and the vertical direction, compute the sum of their squares, divide by 2 and take the square root; finally accumulate the gradient values of all pixels and divide by the total pixel count, giving the average gradient of the whole image. The formulas are:

$$G(i,j) = \sqrt{\frac{\bigl(\nabla_j Y(i,j)\bigr)^2 + \bigl(\nabla_i Y(i,j)\bigr)^2}{2}} \qquad (3)$$

$$Ave\_Gradient = \frac{\sum_{i=1}^{N-2}\sum_{j=1}^{M-2} G(i,j)}{(M-2)\times(N-2)} \qquad (4)$$

where $\nabla_j Y(i,j)$ and $\nabla_i Y(i,j)$ are the horizontal and vertical gradients at pixel $(i,j)$, computed as follows:

$$\nabla_j Y(i,j) = \bigl(Y(i,j-1) - Y(i,j)\bigr) - \bigl(Y(i,j) - Y(i,j+1)\bigr) \qquad (5)$$

$$\nabla_i Y(i,j) = \bigl(Y(i-1,j) - Y(i,j)\bigr) - \bigl(Y(i,j) - Y(i+1,j)\bigr) \qquad (6)$$
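A numpy sketch of equations (3) through (6); note that each second difference simplifies to $Y(\cdot-1) - 2Y(\cdot) + Y(\cdot+1)$, which the code exploits:

```python
import numpy as np

def average_gradient(y: np.ndarray) -> float:
    """Ave_Gradient (eqs. 3-6): RMS of the horizontal and vertical second
    differences at each interior pixel, averaged over the image."""
    y = y.astype(np.float64)
    # second differences, valid for rows 1..N-2 and columns 1..M-2
    d2j = y[1:-1, :-2] - 2.0 * y[1:-1, 1:-1] + y[1:-1, 2:]   # horizontal (eq. 5)
    d2i = y[:-2, 1:-1] - 2.0 * y[1:-1, 1:-1] + y[2:, 1:-1]   # vertical (eq. 6)
    g = np.sqrt((d2j ** 2 + d2i ** 2) / 2.0)                  # eq. 3
    return float(g.mean())                                    # eq. 4
```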
Edge energy statistic Edge_Energy: edges are among the most important features of an image; pixel values vary gently along the direction of an edge and sharply perpendicular to it, and this property allows the edges of an image to be measured statistically. The edge energy statistic is defined as

$$e(i,j) = E_1\bigl(Y(i,j)\bigr) + E_2\bigl(Y(i,j)\bigr) \qquad (7)$$

$$Edge\_Energy = \frac{1}{(M-2)\times(N-2)}\sum_{i=1}^{N-2}\sum_{j=1}^{M-2} e^2(i,j) \qquad (8)$$

where $E_1$ and $E_2$ are two $3\times 3$ templates with the concrete values

$$E_1 = \begin{pmatrix} -1 & -1 & 1 \\ -1 & 4 & -1 \\ 1 & -1 & -1 \end{pmatrix}, \qquad E_2 = \begin{pmatrix} 1 & -1 & -1 \\ -1 & 4 & -1 \\ -1 & -1 & 1 \end{pmatrix}$$

applied to the neighborhood of pixel $(i,j)$.
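A sketch of equations (7) and (8) using scipy.ndimage for the template responses (both templates are point-symmetric, so convolution and correlation coincide); the border mode is our assumption:

```python
import numpy as np
from scipy.ndimage import convolve

E1 = np.array([[-1, -1,  1],
               [-1,  4, -1],
               [ 1, -1, -1]], dtype=np.float64)
E2 = np.array([[ 1, -1, -1],
               [-1,  4, -1],
               [-1, -1,  1]], dtype=np.float64)

def edge_energy(y: np.ndarray) -> float:
    """Edge_Energy (eqs. 7-8): mean squared response of the two
    edge templates over the interior pixels of the image."""
    y = y.astype(np.float64)
    e = convolve(y, E1, mode='nearest') + convolve(y, E2, mode='nearest')
    return float(np.mean(e[1:-1, 1:-1] ** 2))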
Zero-crossing rate statistic ZC: a quantity reflecting edge detail, obtained by comparing the signs of adjacent first-order differences, computed separately for the horizontal and vertical directions. Taking the horizontal zero-crossing rate as an example: first take the first-order difference of the luminance of each row of pixels, then compare adjacent differences; if two adjacent first-order differences have opposite signs, the zero-crossing statistic is 1, otherwise 0:

$$z_h(i,j) = \begin{cases} 1, & d_h(i,j-1)\,d_h(i,j) < 0 \\ 0, & \text{otherwise} \end{cases}, \qquad d_h(i,j) = Y(i,j+1) - Y(i,j) \qquad (9)$$

$$Z_h = \sum_{i=0}^{N-1}\sum_{j=1}^{M-2} z_h(i,j) \qquad (10)$$

where $Z_h$ is the zero-crossing statistic of the horizontal direction. The vertical zero-crossing statistic $Z_v$ is obtained by the same method, and the two are added to give the zero-crossing statistic of the image: $ZC = Z_h + Z_v$.
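A numpy sketch of equations (9) and (10), counting sign changes between adjacent first-order differences in both directions:

```python
import numpy as np

def zero_crossing_rate(y: np.ndarray) -> float:
    """ZC = Z_h + Z_v (eqs. 9-10): number of sign changes between
    adjacent first-order luminance differences along rows and columns."""
    y = y.astype(np.float64)
    dh = np.diff(y, axis=1)                    # horizontal first differences
    dv = np.diff(y, axis=0)                    # vertical first differences
    zh = np.sum(dh[:, :-1] * dh[:, 1:] < 0)    # adjacent products negative
    zv = np.sum(dv[:-1, :] * dv[1:, :] < 0)
    return float(zh + zv)
```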
Averaging these four parameters yields the first spatial parameter of the image, the blur parameter Blur_image (see Fig. 2). The computation formula is:

$$Blur_{image} = \frac{Activity_{image} + Ave\_Gradient + Edge\_Energy + ZC}{4} \qquad (11)$$
Entropy Entropy_image: an index of the amount of information contained in the image, computed with the entropy formula of information theory. The entropy of the luminance information is

$$Entropy_{image} = \sum_{i=1}^{L} p(x_i)\log_2\frac{1}{p(x_i)} \qquad (12)$$

where $L$ is the number of gray levels that occur and $p(x_i)$ is the probability of gray level $x_i$ (the probability is defined as the number of occurrences of that gray level divided by the total pixel count of the image). The resulting Entropy_image is the entropy of the whole image.
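A short numpy sketch of equation (12), assuming an 8-bit luminance plane:

```python
import numpy as np

def image_entropy(y: np.ndarray) -> float:
    """Entropy_image (eq. 12): Shannon entropy of the luminance histogram."""
    counts = np.bincount(y.astype(np.uint8).ravel(), minlength=256)
    p = counts[counts > 0] / counts.sum()   # probabilities of occurring levels
    return float(-np.sum(p * np.log2(p)))
```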
Blocking artifact Block_image: in video coding and decoding algorithms, the pseudo-edges caused by blocking appear mostly as a pseudo-periodic signal with period 8, so the blocking artifact can be analyzed from the spectrum. Taking the horizontal direction as an example, first compute the absolute value of the first-order difference of every row; for row $i$ the formula is

$$f_i(j) = |Y(i,j+1) - Y(i,j)|, \quad 0 \le j \le M-2 \qquad (13)$$

If the blocking is clearly visible, the row signal $f_i(j)$ contains a pseudo-periodic component with period 8. Zero-pad $f_i(j)$ so that its length becomes an integral power of 2, obtaining $f'_i(j)$; apply the fast Fourier transform to it and compute the amplitude spectrum of the Fourier coefficients; carry out the same operation for every row, and finally accumulate the amplitude spectra of the Fourier transforms of all rows. This yields the frequency-domain statistic array of the horizontal direction:

$$F(n) = \sum_{i=0}^{N-1}\bigl|\mathrm{FFT}(f'_i)(n)\bigr|, \quad 0 \le n \le L-1 \qquad (14)$$

where $L$ is the length after zero-padding. Analysis of $F(n)$ shows that when the blocking is clearly visible, peaks appear at the characteristic frequency positions at multiples of one eighth of the length of $F(n)$ ($L/8$, $L/4$, ..., up to position $L/2$). The more visible the blocking, the larger the peaks at the characteristic frequency points, i.e. the stronger the period-8 component. These peaks arise mainly from the blocking artifact, and what they reflect spatially is the size of the luminance gaps at the edges between adjacent blocks. To compute the degree of blocking, the positions where such peaks may occur are examined: first confirm that the point is a peak, i.e. its value exceeds the values on either side; if it is a peak, apply a median filter around it with a filter window of 3 or 5, subtract the median-filtered value from the peak, and divide by the DC coefficient $F(0)$ of the amplitude spectrum. The result is the blocking feature $B_h$ of the image in the horizontal direction. The division by the DC coefficient is designed to account for the masking characteristic of human vision: because absolute values are taken after the first-order difference, a larger DC coefficient means less correlation between the pixels of a row, i.e. stronger image variation; by the visual characteristics of the human eye, impairments are not easily noticed where edge information is plentiful, i.e. where the image changes violently. Even when the gaps at adjacent block edges are the same, they become less noticeable if the pixels within the blocks vary strongly, so dividing by the DC coefficient better reflects the visual characteristics of the eye.

Applying the same method in the vertical direction gives the blocking feature $B_v$ in the vertical direction; the two are added and averaged to give the feature value of total blocking:

$$Block_{image} = \frac{B_h + B_v}{2} \qquad (15)$$

Because applying a Fourier transform to every row is computationally heavy and time-consuming, a further improvement was made on this basis. Still taking the horizontal direction as an example: in the improved algorithm the first-order difference of each individual row is no longer transformed; instead, the first-order differences of several consecutive rows are accumulated, the accumulated array is fast-Fourier-transformed, and the amplitude spectrum of the Fourier coefficients is computed. The characteristic frequency positions at the $L/8$ multiples of the amplitude spectrum are analyzed with the method introduced above, giving the horizontal blocking feature of those rows; the same operation is performed on the next group of rows, and the features of all groups are added and averaged to give the blocking feature in the horizontal direction. Accumulating groups of rows, typically 8 to 16, reflects the degree of blocking of a region rather than of a single row, which is more reasonable with respect to the masking characteristic of human vision. The blocking feature in the vertical direction is obtained by the same method. The improved method is much faster and performs better; in our experiments the first-order differences of 16 rows are accumulated before each fast Fourier transform.
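A numpy sketch of the improved blocking measure (eqs. 13-15) under our own assumptions: the peak test uses only the two neighbors, the median window is 5, and the characteristic positions checked are $L/8$, $L/4$, $3L/8$ and $L/2$; function names are illustrative:

```python
import numpy as np

def _blockiness_1d(diff_accum: np.ndarray) -> float:
    """Blocking feature from one accumulated absolute-difference array."""
    m = diff_accum.shape[-1]
    L = 1 << int(np.ceil(np.log2(m)))            # zero-pad to a power of two
    spec = np.abs(np.fft.fft(diff_accum, n=L))   # amplitude spectrum (eq. 14)
    score = 0.0
    for n in (L // 8, L // 4, 3 * L // 8, L // 2):
        if spec[n] > spec[n - 1] and spec[n] > spec[n + 1]:   # a true peak?
            med = np.median(spec[n - 2:n + 3])   # median filter, window 5
            score += (spec[n] - med) / spec[0]   # normalize by DC (masking)
    return score

def blockiness(y: np.ndarray, group: int = 16) -> float:
    """Block_image = (B_h + B_v) / 2 (eq. 15), improved variant: absolute
    first differences of `group` rows are accumulated before each FFT."""
    y = y.astype(np.float64)

    def one_direction(absdiff: np.ndarray) -> float:
        scores = [_blockiness_1d(absdiff[k:k + group].sum(axis=0))
                  for k in range(0, absdiff.shape[0] - group + 1, group)]
        return float(np.mean(scores)) if scores else 0.0

    bh = one_direction(np.abs(np.diff(y, axis=1)))    # along rows (eq. 13)
    bv = one_direction(np.abs(np.diff(y.T, axis=1)))  # along columns
    return (bh + bv) / 2.0
```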
Frequency-domain energy analysis Fre_Energy: combining the characteristics of the human visual system (HVS), and considering that the human eye has different sensitivities to different frequency components of an image, the frequency-domain energy distribution feature of the current image is extracted based on the contrast sensitivity function (CSF) established by Mannos and Sakrison. The CSF characteristic curve can be analyzed with a bank of band-pass filters, so in the process of extracting the frequency-domain energy distribution the image can first be decomposed by a group of oriented band-pass filters, each responding only to spatial frequencies and orientations near its center frequency. In principle, any transform that yields frequency-domain parameters (DCT, FFT, etc.) can be used to analyze the frequency characteristics of the image, but because the data structure of a wavelet decomposition strongly resembles the multichannel nature of visual perception, the wavelet transform is adopted here as the analysis tool. The concrete steps are as follows:

A four-level wavelet transform is applied to the luminance image, as shown in Fig. 4(a). The wavelet used here is the W53 wavelet, whose high-pass and low-pass coefficients are $\{0.25, -0.5, 0.25\}$ and $\{-0.125, 0.25, 0.75, 0.25, -0.125\}$ respectively. The amplitudes of the transformed wavelet coefficients are accumulated level by level: the squared wavelet amplitudes within one level are summed and divided by the total number of samples of that level, and the resulting value is the energy $E(Lx)$ of the frequency band corresponding to that level, where $x$ takes the values 0, 1, 2, 3, 4. The nonlinear band-pass characteristic of the CSF is then exploited to weight the wavelet coefficients of the different spatial-frequency bands of the decomposition, the weight being the mean value of the CSF curve within the band, see Fig. 4(b). The frequency-domain energy is computed as follows, Fre_Energy being the resulting frequency-domain energy distribution feature of the image:

$$Fre\_Energy = 2.25\,E(L0) + 2.87\,E(L1) + 3.16\,E(L2) + 2.56\,E(L3) + 1.00\,E(L4) \qquad (16)$$

In this process, the resulting value reflects the proportion of the components the human eye is sensitive to. For an image of pure color (containing only the lowest-frequency component), or an image consisting entirely of noise (whose frequency-domain energy is concentrated almost completely at high frequencies), the eye neither sees content of interest nor finds components it is sensitive to.
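A rough numpy sketch of the four-level decomposition and the CSF weighting of eq. (16). Several points are our own assumptions: the separable filtering with 'same'-length convolution and even-sample downsampling, the restored minus sign on the first low-pass coefficient (by symmetry of the 5/3 filter), and the mapping of $L0$ (finest detail band) through $L4$ (level-4 approximation) onto the weights:

```python
import numpy as np

# W53 analysis filters as given in the text (leading minus on the
# low-pass filter restored by symmetry -- an assumption).
HI = np.array([0.25, -0.5, 0.25])
LO = np.array([-0.125, 0.25, 0.75, 0.25, -0.125])

def _filt_down(x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Filter each row of x with h, then downsample the columns by two."""
    y = np.apply_along_axis(np.convolve, 1, x, h, mode='same')
    return y[:, ::2]

def _dwt_level(img: np.ndarray):
    """One separable analysis level: approximation plus three detail bands."""
    lo, hi = _filt_down(img, LO), _filt_down(img, HI)
    ll = _filt_down(lo.T, LO).T
    details = (_filt_down(lo.T, HI).T,
               _filt_down(hi.T, LO).T,
               _filt_down(hi.T, HI).T)
    return ll, details

def fre_energy(y: np.ndarray) -> float:
    """Fre_Energy (eq. 16): CSF-weighted mean squared wavelet amplitude,
    assuming L0 = finest detail band, ..., L4 = level-4 approximation."""
    weights = [2.25, 2.87, 3.16, 2.56, 1.00]
    ll, energies = y.astype(np.float64), []
    for _ in range(4):
        ll, details = _dwt_level(ll)
        d = np.concatenate([b.ravel() for b in details])
        energies.append(float(np.sum(d * d) / d.size))   # E(L0)..E(L3)
    energies.append(float(np.sum(ll * ll) / ll.size))    # E(L4)
    return sum(w * e for w, e in zip(weights, energies))
```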
Saturation Chrome_image: quality impairment in color mostly shows up as a decline in chrominance saturation. The computation uses the modulus of the two chrominance components of each image pixel: compute the square root of the sum of squares of the U and V components, accumulate the statistics of all moduli, and divide by the number of chrominance samples, giving the chrominance saturation:

$$Chrome_{image} = \frac{1}{M_{UV}\times N_{UV}}\sum_{i=0}^{N_{UV}-1}\sum_{j=0}^{M_{UV}-1}\sqrt{U^2(i,j) + V^2(i,j)} \qquad (17)$$

where $U(i,j)$ and $V(i,j)$ are the chrominance values of the chrominance signal at row $i$, column $j$, and $M_{UV}$, $N_{UV}$ are the numbers of chrominance samples in the horizontal and vertical directions. For the four common chroma sampling formats, the relation of these two variables to $M$ and $N$ is as follows:

When the sampling format is 4:4:4, $M_{UV} = M$, $N_{UV} = N$;

When the sampling format is 4:2:2, $M_{UV} = M/2$, $N_{UV} = N$;

When the sampling format is 4:1:1, $M_{UV} = M/4$, $N_{UV} = N$;

When the sampling format is 4:2:0, $M_{UV} = M/2$, $N_{UV} = N/2$.
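A small numpy sketch of eq. (17); treating U and V as signed values (any chroma offset such as 128 already removed) is our assumption, and any of the subsampled formats is handled simply by passing the subsampled planes:

```python
import numpy as np

def saturation(u: np.ndarray, v: np.ndarray) -> float:
    """Chrome_image (eq. 17): mean chroma modulus sqrt(U^2 + V^2)
    over the chrominance planes."""
    u = u.astype(np.float64)
    v = v.astype(np.float64)
    return float(np.mean(np.sqrt(u * u + v * v)))
```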
Frame difference Diff_Frame: the extraction of the temporal feature, a very important parameter in video quality evaluation. This feature is computed from the difference of the luminance information of two consecutive frames:

$$Diff\_Frame = \frac{\sum_{i=0}^{N-1}\sum_{j=0}^{M-1}\bigl|Y(i,j,t+1) - Y(i,j,t)\bigr|}{M\times N} \qquad (18)$$

where the variable $t$ is the time-axis parameter of the video sequence. The frame difference is thus the mean of the absolute luminance differences between corresponding pixels of a frame and its predecessor.
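A one-line numpy sketch of eq. (18):

```python
import numpy as np

def frame_difference(y_t: np.ndarray, y_t1: np.ndarray) -> float:
    """Diff_Frame (eq. 18): mean absolute luminance difference between
    consecutive frames t and t+1."""
    return float(np.mean(np.abs(y_t1.astype(np.float64) -
                                y_t.astype(np.float64))))
```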
The feature extraction module thus outputs the six feature parameters of each training sample as the input parameters of the artificial neural network; during the training process, the subjective assessment results of these training samples must also be supplied, as the desired output data of the system in the training stage.
Description of drawings
Fig. 1 is the system block diagram of the video quality evaluation system.
Fig. 2 is the schematic diagram of the feature parameter extraction module.
Fig. 3 is the block diagram of the artificial neural network based on the BP algorithm.
Fig. 4 is a schematic diagram of how the weight coefficients of the frequency bands obtained by wavelet decomposition are determined from the CSF curve: Fig. 4(a) shows the spatial-frequency bands of the four-level wavelet transform and their corresponding weight coefficients, and Fig. 4(b) shows how the CSF curve defines the weight within each spatial-frequency band.
Fig. 5 is the flow chart of the objective evaluation of a test sequence.
Fig. 6 shows the main interface of the video evaluation software based on the artificial neural network.
Fig. 7 is a schematic of the training samples.
Fig. 8 shows the subjective-objective fit analysis of the training samples in the test process, with the objective score output by the system on the abscissa and the subjective score on the ordinate.
Fig. 9 is a schematic of the test samples.
Fig. 10 shows the subjective-objective fit analysis of the test samples in the test process, with the objective score output by the system on the abscissa and the subjective score on the ordinate.
Embodiment
In the system block diagram of Fig. 1, all training and test samples come from standard video test sequences; their selection, in particular that of the training samples, should strictly follow the standard ITU-R BT.1210. The sequences may be stored as YUV files or in other standard formats. The subjective assessment uses the double-stimulus continuous quality scale method of the international standard, with the DMOS (Difference Mean Opinion Score) as the subjective assessment result of each sample. One further note: in the descriptions of the system framework and the modules above, a frame is the basic processing unit; for interlaced display formats, however, the basic unit of computation is a field, and it suffices in processing to apply the operations above only to the top field of each frame.
Video quality evaluation is realized mainly in software, covering feature parameter extraction, network training, quality evaluation of input sequences, and so on. The implementation of the system is detailed below for the objective evaluation of high-definition television video (hereinafter "HD video").
This system adopts a multilayer neural network based on the BP learning algorithm, referred to here as the BP network for short.
Artificial neural network training module: as shown in Fig. 3, the BP network designed for this system contains only one hidden layer; the input layer defines six nodes, corresponding to the six feature parameters; the output layer has a single node, corresponding to the video quality score, which in the training stage is the desired output, i.e. the subjective assessment result of the corresponding training sample. The BP network defined here uses the conventional training algorithm (steepest gradient descent), which is not detailed further. The trained artificial neural network realizes objective evaluation of video quality; for input test sequences, the test module is used to measure the performance of this evaluation system.
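A compact numpy sketch of such a 6-16-1 BP network trained by gradient descent with momentum, using the hyperparameters reported in the training process below (16 hidden nodes, momentum factor 0.95, initial learning rate 0.0001); the sigmoid hidden activation, linear output node, and exact update form are our own assumptions:

```python
import numpy as np

class BPNetwork:
    """Minimal 6-16-1 multilayer perceptron with momentum (a sketch)."""

    def __init__(self, n_in=6, n_hidden=16, lr=0.0001, momentum=0.95):
        rng = np.random.default_rng(0)
        self.w1 = rng.uniform(-0.5, 0.5, (n_in, n_hidden))   # random init
        self.w2 = rng.uniform(-0.5, 0.5, (n_hidden, 1))
        self.lr, self.mu = lr, momentum
        self.dw1 = np.zeros_like(self.w1)
        self.dw2 = np.zeros_like(self.w2)

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.h = self._sig(x @ self.w1)    # hidden activations
        return self.h @ self.w2            # linear output node

    def train_step(self, x, d):
        """One BP update for inputs x (samples x 6) and targets d (samples x 1);
        returns the accumulated squared error used as the stop criterion."""
        y = self.forward(x)
        err = y - d                                            # output error
        g2 = self.h.T @ err                                    # output-layer grad
        g1 = x.T @ ((err @ self.w2.T) * self.h * (1 - self.h)) # hidden-layer grad
        self.dw2 = self.mu * self.dw2 - self.lr * g2
        self.dw1 = self.mu * self.dw1 - self.lr * g1
        self.w2 += self.dw2
        self.w1 += self.dw1
        return float(np.sum(err ** 2))
```

Training would loop `train_step` over the parameter database until the accumulated error falls below the threshold (0.0004 in the implementation described below).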
Test module: the testing process is shown in Fig. 5. For an input test sequence, the concrete steps are as follows:
501: open the test sequence file and prepare to read image data;
502: read the image data of the current frame F_C and the next frame F_N;
503: extract the six feature parameters defined above;
504: add each feature parameter to its respective running sum, in preparation for computing the corresponding parameters of the whole sequence;
505: check whether frame F_N is the last frame of the sequence; if so, go to 506; if not, go to 502;
506: divide each feature sum by the total number of frames of the sequence, giving the final feature parameters corresponding to this sequence;
507: use the six sequence parameter values obtained above as the input of the artificial neural network, and compute the objective evaluation result of the input sequence with the trained network;
508: output the objective evaluation result for the video quality of this test sequence.
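A sketch of steps 501 through 508, assuming the per-frame feature sketches above and the BPNetwork sketch; the frame iterator over decoded (Y, U, V) planes is hypothetical:

```python
import numpy as np

def evaluate_sequence(frames, net) -> float:
    """Steps 501-508: average the six per-frame features over the sequence,
    then score the averages with the trained network. `frames` yields
    (Y, U, V) planes per frame; `net` is a trained BPNetwork."""
    totals = np.zeros(6)
    count = 0
    prev_y = None
    for y, u, v in frames:                                   # 502: read frames
        blur = (image_activity(y) + average_gradient(y) +
                edge_energy(y) + zero_crossing_rate(y)) / 4.0  # eq. 11
        # first frame has no predecessor; a zero difference is our assumption
        diff = frame_difference(prev_y, y) if prev_y is not None else 0.0
        totals += [blur, image_entropy(y), blockiness(y),
                   fre_energy(y), saturation(u, v), diff]      # 503-504
        prev_y, count = y, count + 1
    params = totals / count                                  # 506: sequence means
    return float(net.forward(params[None, :])[0, 0])         # 507-508: score
```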
Fig. 6 shows the main interface of the system, which supports previewing the video under test, computing its various feature parameters, browsing the parameter database, and running the objective evaluation of test sequences.
First, we selected eight original HD video sequences meeting the test requirements (YUV format, 4:2:0, 10 to 14 seconds each) and produced compressed streams at eight bit rates (8, 10, 12, 14, 16, 18, 20 and 25 Mbit/s) with an MPEG-2 hardware encoder. Together with the undistorted originals, this gave 72 HD video sequences for building and testing the system.
Second, we obtained the subjective assessment results of these HD sequences with the double-stimulus continuous quality scale method. For training samples, the subjective result serves as the desired output during model training and is used to adjust the model parameters; for test samples, the subjective result serves to analyze the accuracy of the objective results output by the system, i.e. the correlation coefficient between subjective and objective results measures the evaluation performance of the system.
Training process: we selected all bit rates of six sequences as the training set (54 training samples in total); the content of the six training sequences is shown in Fig. 7. The six parameters of each sample are extracted and stored in the parameter database together with the corresponding subjective result, for training the neural network. In the network design, the number of hidden-layer nodes is 16, the momentum factor is 0.95, the initial learning rate is 0.0001 (adjustable during training), and the initial connection weights between nodes are random. Training ends when the stopping condition is met (in the current implementation the accumulated-error threshold is defined as 0.0004). Feeding the training samples back into the trained system and analyzing the fit between the system's output and the subjective results (Fig. 8) gives a correlation coefficient of 0.958 between the two data sets.
Test process: we defined the two remaining videos, which took no part in training, together with all their compressed bit rates as the test set (18 test samples in total); the content of the two test sequences is shown in Fig. 9. As before, the six parameters of each sequence are first extracted, and the evaluation output of the system, i.e. the objective result, is then computed. Since these sequences never appeared in the training process, this test measures the reliability of the system in actual use. Fig. 10 shows the correlation between the objective results of the system and the subjective results for the test samples: the correlation coefficient is 0.930.
Analysis of the system's advantages: the no-reference objective video quality evaluation system based on an artificial neural network effectively incorporates the widely accepted, highly reliable results of subjective assessment, and thus fits the human visual system as closely as possible, making the objective results highly correlated with the subjective ones. The system can therefore, to a certain extent, replace subjective assessment, which is time-consuming, laborious and expensive; this is precisely the point of objective evaluation. Once training is complete, the system needs no data whatsoever from the original video during practical quality evaluation and can measure impaired video quality quickly and accurately at the receiving end alone. Furthermore, the system can be deployed as a measurement module at multiple intermediate links of a video application system for on-line testing, enabling application-system planning and performance analysis based on video quality.

Claims (3)

1. A video quality evaluation method based on an artificial neural network, the method comprising, in order, the steps of:
1) for a training sample, first extracting the feature parameters of the sample as the input parameters of the artificial neural network; at the same time, obtaining the subjective assessment result of the training sample through subjective assessment as the desired output of the network;
2) storing the parameters obtained above, comprising the feature parameters of the sample and the corresponding subjective assessment result, in a parameter database for training the artificial neural network;
3) executing the training process of the artificial neural network: for each input sample, taking its corresponding feature parameters as the system input, fed to the respective input nodes of the network, and taking the subjective assessment result of the input sample as the desired system output; adjusting the connection weights between the nodes of the network according to the difference between the desired output and the actual output;
4) when the termination condition is met, the no-reference evaluation model based on the artificial neural network is established; afterwards, the performance of the model is measured with test samples;
5) for an input test sample, extracting the same feature parameters and feeding them into the trained model, thereby obtaining the quality evaluation result of the test sample;
characterized in that the extraction of the feature parameters of the video image in step 1) is specifically:
extracting nine feature parameters, namely eight spatial parameters (image activity, average gradient, edge energy statistic, zero-crossing rate statistic, entropy, blocking artifact, frequency-domain energy distribution characteristic, saturation) and one temporal parameter (frame difference); and averaging the first four spatial parameters, the result being defined as the blur parameter of the image;
image activity Activity_image:
computing the horizontal image activity $H_1$ as in the following formulas:

$$D(i) = \sum_{j=1}^{M-1}\bigl(Y(i,j) - Y(i,j-1)\bigr)^2, \quad 0 \le i \le N-1 \qquad (1)$$

$$H_1 = \sum_{i=0}^{N-1} D(i) \qquad (2)$$

where $Y$ is the luminance plane of the image and $Y(i,j)$ is the luminance value of the pixel at row $i$, column $j$; $M$ and $N$ are the numbers of pixels in the horizontal and vertical directions respectively; $D(i)$ is the statistic of row $i$, and $H_1$ is the horizontal image activity; the vertical image activity $V_1$ is obtained by the same method, and the two are added to give the image activity of the whole image, $Activity_{image} = H_1 + V_1$;
in the formulas below, the definitions of the variables $Y(i,j)$, $M$ and $N$ are all the same as above;
average gradient Ave_Gradient:
first computing the gradient value $G(i,j)$ of each pixel, then accumulating the gradient values of all pixels and dividing by the total pixel count to obtain the average gradient Ave_Gradient of the whole image, according to the formulas:

$$G(i,j) = \sqrt{\frac{\bigl(\nabla_j Y(i,j)\bigr)^2 + \bigl(\nabla_i Y(i,j)\bigr)^2}{2}} \qquad (3)$$

$$Ave\_Gradient = \frac{\sum_{i=1}^{N-2}\sum_{j=1}^{M-2} G(i,j)}{(M-2)\times(N-2)} \qquad (4)$$

where $\nabla_j Y(i,j)$ and $\nabla_i Y(i,j)$ are the horizontal and vertical gradients at pixel position $(i,j)$ respectively, computed according to the formulas:

$$\nabla_j Y(i,j) = \bigl(Y(i,j-1) - Y(i,j)\bigr) - \bigl(Y(i,j) - Y(i,j+1)\bigr) \qquad (5)$$

$$\nabla_i Y(i,j) = \bigl(Y(i-1,j) - Y(i,j)\bigr) - \bigl(Y(i,j) - Y(i+1,j)\bigr) \qquad (6)$$
edge energy statistic Edge_Energy:
first computing the edge-signal energy of each pixel, then the edge energy of the whole image, as in the formulas:

$$e(i,j) = E_1\bigl(Y(i,j)\bigr) + E_2\bigl(Y(i,j)\bigr) \qquad (7)$$

$$Edge\_Energy = \frac{1}{(M-2)\times(N-2)}\sum_{i=1}^{N-2}\sum_{j=1}^{M-2} e^2(i,j) \qquad (8)$$

where $E_1$ and $E_2$ are two $3\times 3$ templates with the concrete values

$$E_1 = \begin{pmatrix} -1 & -1 & 1 \\ -1 & 4 & -1 \\ 1 & -1 & -1 \end{pmatrix}, \qquad E_2 = \begin{pmatrix} 1 & -1 & -1 \\ -1 & 4 & -1 \\ -1 & -1 & 1 \end{pmatrix};$$
zero-crossing rate statistic ZC:
computing the zero-crossing rates of the horizontal and vertical directions separately; taking the horizontal zero-crossing rate as an example, the computation is as follows: first take the first-order difference of the luminance values of each row of pixels, then compare the signs of adjacent first-order differences; if two adjacent first-order differences have opposite signs, the zero-crossing statistic is 1, otherwise 0;
according to the formulas:

$$z_h(i,j) = \begin{cases} 1, & d_h(i,j-1)\,d_h(i,j) < 0 \\ 0, & \text{otherwise} \end{cases}, \qquad d_h(i,j) = Y(i,j+1) - Y(i,j) \qquad (9)$$

$$Z_h = \sum_{i=0}^{N-1}\sum_{j=1}^{M-2} z_h(i,j) \qquad (10)$$

where $Z_h$ is the zero-crossing statistic of the horizontal direction; the vertical zero-crossing statistic $Z_v$ is obtained by the same method, and the two are added to give the zero-crossing statistic of the image, $ZC = Z_h + Z_v$;
averaging the above four parameters to obtain the first spatial parameter of the image, the blur parameter $Blur_{image}$, according to the formula:

$$Blur_{image} = \frac{Activity_{image} + Ave\_Gradient + Edge\_Energy + ZC}{4} \qquad (11)$$
entropy Entropy_image:
computing the entropy of the luminance information according to the formula:

$$Entropy_{image} = \sum_{i=1}^{L} p(x_i)\log_2\frac{1}{p(x_i)} \qquad (12)$$

where $L$ is the number of gray levels that occur and $p(x_i)$ is the probability of a gray level, computed as the number of occurrences of that gray level divided by the total pixel count of the image; the Entropy_image obtained from the formula above is the entropy of the whole image;
the feature value of total blocking artifact Block_image:
horizontal direction: first computing the absolute value of the first-order difference of every row; for row $i$ the formula is

$$f_i(j) = |Y(i,j+1) - Y(i,j)|, \quad 0 \le j \le M-2 \qquad (13)$$

the $f_i(j)$ so obtained being an array of length $M-1$; zero-padding $f_i(j)$ so that its length becomes an integral power of 2, obtaining $f'_i(j)$; applying the fast Fourier transform to it and computing the amplitude spectrum of the Fourier coefficients; performing the same operation on every row; accumulating the amplitude spectra of the Fourier transforms of all rows to obtain the frequency-domain statistic array of the horizontal direction:

$$F(n) = \sum_{i=0}^{N-1}\bigl|\mathrm{FFT}(f'_i)(n)\bigr|, \quad 0 \le n \le L-1 \qquad (14)$$

where $L$ is the length after zero-padding;
when computing the degree of blocking, median-filtering around each peak point with a filter window of 3 or 5, subtracting the median-filtered value from the peak, and dividing by the DC coefficient $F(0)$ of the amplitude spectrum, which gives the blocking feature $B_h$ of the image in the horizontal direction;
applying the same method in the vertical direction to obtain the vertical blocking feature $B_v$, the two being added and averaged to give the feature value of total blocking:

$$Block_{image} = \frac{B_h + B_v}{2} \qquad (15)$$
frequency-domain energy analysis Fre_Energy: performing frequency-domain energy analysis on the luminance image to obtain the frequency-domain energy distribution feature of the image;
saturation Chrome_image: computing the saturation feature from the two chrominance planes of the image, as follows: first computing the square root of the sum of squares of the two chrominance components $U$ and $V$ of each image pixel as the chrominance modulus of that pixel, then accumulating the statistics of all moduli and dividing by the number of chrominance samples, giving the saturation parameter of the image, according to the formula:

$$Chrome_{image} = \frac{1}{M_{UV}\times N_{UV}}\sum_{i=0}^{N_{UV}-1}\sum_{j=0}^{M_{UV}-1}\sqrt{U^2(i,j) + V^2(i,j)} \qquad (17)$$

where $U(i,j)$ and $V(i,j)$ are the chrominance values of the chrominance signal at row $i$, column $j$, and $M_{UV}$, $N_{UV}$ are the numbers of chrominance samples in the horizontal and vertical directions; for the four common chroma sampling formats, the relation of these two variables to $M$ and $N$ is as follows:
when the sampling format is 4:4:4, $M_{UV} = M$, $N_{UV} = N$;
when the sampling format is 4:2:2, $M_{UV} = M/2$, $N_{UV} = N$;
when the sampling format is 4:1:1, $M_{UV} = M/4$, $N_{UV} = N$;
when the sampling format is 4:2:0, $M_{UV} = M/2$, $N_{UV} = N/2$;
frame difference Diff_Frame:
this feature is computed from the difference of the luminance information of two consecutive frames, according to the formula:

$$Diff\_Frame = \frac{\sum_{i=0}^{N-1}\sum_{j=0}^{M-1}\bigl|Y(i,j,t+1) - Y(i,j,t)\bigr|}{M\times N} \qquad (18)$$

where the variable $t$ is the time-axis parameter of the video sequence; the frame difference is thus the mean of the absolute luminance differences between corresponding pixels of a frame and its predecessor.
2. The video quality evaluation method according to claim 1, characterized in that, in computing the feature value of total blocking artifact Block_image, when performing the fast Fourier transform, the first-order differences of Q rows are accumulated, the accumulated array is then fast-Fourier-transformed, and the amplitude spectrum of the Fourier coefficients is computed; the horizontal blocking feature of those Q rows is calculated; the same operation is performed on the next Q rows, Q being any value from 8 to 16; the whole image is processed in this way, and the horizontal blocking features of all groups are added and averaged to give the blocking feature of the image in the horizontal direction.
3. The video quality evaluation method according to claim 1, characterized in that the frequency-domain energy analysis of the luminance image uses the wavelet transform; the wavelet used in the frequency-domain energy analysis is the W53 wavelet, whose high-pass and low-pass coefficients are $\{0.25, -0.5, 0.25\}$ and $\{-0.125, 0.25, 0.75, 0.25, -0.125\}$ respectively; the amplitudes of the wavelet coefficients after the four-level transform are accumulated level by level, the sum of the squared wavelet amplitudes within one level divided by the total number of samples of that level giving the energy $E(Lx)$ of the frequency band corresponding to that level, where $x$ takes the values 0, 1, 2, 3, 4;
the nonlinear band-pass characteristic of the CSF is then used to weight the wavelet coefficients of the different spatial-frequency bands of the wavelet decomposition, the frequency-domain energy being computed according to the following formula, Fre_Energy being the resulting frequency-domain energy distribution feature of the image:

$$Fre\_Energy = 2.25\,E(L0) + 2.87\,E(L1) + 3.16\,E(L2) + 2.56\,E(L3) + 1.00\,E(L4) \qquad (16).$$
CN 200810106132 2008-05-09 2008-05-09 Video quality evaluation method based on an artificial neural network Expired - Fee Related CN100559881C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200810106132 CN100559881C (en) 2008-05-09 2008-05-09 Video quality evaluation method based on an artificial neural network


Publications (2)

Publication Number Publication Date
CN101282481A CN101282481A (en) 2008-10-08
CN100559881C true CN100559881C (en) 2009-11-11

Family

ID=40014717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810106132 Expired - Fee Related CN100559881C (en) 2008-05-09 2008-05-09 Video quality evaluation method based on an artificial neural network

Country Status (1)

Country Link
CN (1) CN100559881C (en)

Families Citing this family (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742353B (en) * 2008-11-04 2012-01-04 工业和信息化部电信传输研究所 No-reference video quality evaluating method
CN101426150B (en) * 2008-12-08 2011-05-11 青岛海信电子产业控股股份有限公司 Video image quality evaluation method and system
CN101765026B (en) * 2008-12-24 2011-07-27 中国移动通信集团公司 Correction method and correction system of image quality evaluation values
CN101901482B (en) * 2009-05-31 2012-05-02 汉王科技股份有限公司 Method for judging quality effect of defogged and enhanced image
CN101668226B (en) * 2009-09-11 2011-05-04 重庆医科大学 Method for acquiring color image with best quality
US8823873B2 (en) 2009-10-10 2014-09-02 Thomson Licensing Method and device for calculating blur in video images
CN101877127B (en) * 2009-11-12 2012-04-11 北京大学 Image reference-free quality evaluation method and system based on gradient profile
CN101789091A (en) * 2010-02-05 2010-07-28 上海全土豆网络科技有限公司 System and method for automatically identifying video definition
WO2011134110A1 (en) * 2010-04-30 2011-11-03 Thomson Licensing Method and apparatus for measuring video quality using at least one semi -supervised learning regressor for mean observer score prediction
CN103385000A (en) * 2010-07-30 2013-11-06 汤姆逊许可公司 Method and apparatus for measuring video quality
CN101950418B (en) * 2010-08-26 2012-08-08 北京中创信测科技股份有限公司 Image quality evaluation method and device
CN102137271A (en) * 2010-11-04 2011-07-27 华为软件技术有限公司 Method and device for evaluating image quality
CN102478584B (en) * 2010-11-26 2014-10-15 香港理工大学 Wind power station wind speed prediction method based on wavelet analysis and system thereof
CN102098530A (en) * 2010-12-02 2011-06-15 惠州Tcl移动通信有限公司 Method and device for automatically distinguishing quality of camera module
CN102572501A (en) * 2010-12-23 2012-07-11 华东师范大学 Video quality evaluation method and device capable of taking network performance and video self-owned characteristics into account
CN102075786B (en) * 2011-01-19 2012-10-24 宁波大学 Method for objectively evaluating image quality
CN103609069B (en) * 2011-06-21 2018-07-24 汤姆逊许可公司 Subscriber terminal equipment, server apparatus, system and method for assessing media data quality
CN102202227B (en) * 2011-06-21 2013-02-20 珠海世纪鼎利通信科技股份有限公司 No-reference objective video quality assessment method
CN102231844B (en) * 2011-07-21 2013-04-03 西安电子科技大学 Video image fusion performance evaluation method based on structure similarity and human vision
CN102955947B * 2011-08-19 2017-09-22 北京百度网讯科技有限公司 A kind of device and method for determining image definition
CN102539326B (en) * 2012-01-13 2014-03-12 江苏大学 Method for carrying out quantitative evaluation on soup hue quality of tea
CN103369349B * 2012-03-28 2016-04-27 中国移动通信集团公司 A kind of digital video quality control method and device thereof
CN102685548B * 2012-05-29 2015-09-30 公安部第三研究所 No-reference evaluation method of video quality
CN102880934A (en) * 2012-09-07 2013-01-16 中国标准化研究院 Integrity evaluation method for food enterprise
CN103354617B (en) * 2013-07-03 2015-03-04 宁波大学 Boundary strength compressing image quality objective evaluation method based on DCT domain
CN103871054B (en) * 2014-02-27 2017-01-11 华中科技大学 Combined index-based image segmentation result quantitative evaluation method
CN103841410B * 2014-03-05 2016-05-04 北京邮电大学 Semi-reference video QoE objective evaluation method based on image feature information
CN105338339B (en) * 2014-07-29 2018-02-27 联想(北京)有限公司 Information processing method and electronic equipment
CN105991995B * 2015-02-13 2019-05-31 中国科学院西安光学精密机械研究所 No-reference video quality evaluating method based on 3D-DCT domain statistical analysis
GB201515142D0 (en) * 2015-08-26 2015-10-07 Quantel Holdings Ltd Determining a quality measure for a processed video signal
CN105205504B (en) * 2015-10-04 2018-09-18 北京航空航天大学 A kind of image attention regional quality evaluation index learning method based on data-driven
US10410330B2 (en) 2015-11-12 2019-09-10 University Of Virginia Patent Foundation System and method for comparison-based image quality assessment
CN105513048A (en) * 2015-11-24 2016-04-20 西安电子科技大学昆山创新研究院 Sub-band-information-entropy-measure-based image quality evaluation method
CN105608700B (en) * 2015-12-24 2019-12-17 广州视源电子科技股份有限公司 Photo screening method and system
CN116468815A (en) * 2016-01-25 2023-07-21 渊慧科技有限公司 Generating images using neural networks
CN105516716B * 2016-01-27 2017-10-27 华东师范大学 On-site test method for video image quality of a closed-loop security system
CN105913433A (en) * 2016-04-12 2016-08-31 北京小米移动软件有限公司 Information pushing method and information pushing device
CN106228556B (en) * 2016-07-22 2019-12-06 北京小米移动软件有限公司 image quality analysis method and device
CN106060537B (en) * 2016-08-04 2019-08-13 腾讯科技(深圳)有限公司 A kind of video quality evaluation method and device
CN106383888A (en) * 2016-09-22 2017-02-08 深圳市唯特视科技有限公司 Method for positioning and navigation by use of picture retrieval
CN106651829B * 2016-09-23 2019-10-08 中国传媒大学 A kind of no-reference image objective quality evaluation method based on energy and texture analysis
CN108509457A (en) * 2017-02-28 2018-09-07 阿里巴巴集团控股有限公司 A kind of recommendation method and apparatus of video data
CN108665433B (en) * 2017-04-01 2021-05-18 西安电子科技大学 No-reference natural image quality evaluation method combining multiple characteristics
CN107027023B * 2017-04-24 2018-07-13 北京理工大学 Neural-network-based no-reference objective evaluation method for VoIP video communication quality
CN107360435B (en) * 2017-06-12 2019-09-20 苏州科达科技股份有限公司 Blockiness detection methods, block noise filtering method and device
CN107396094B * 2017-08-17 2019-02-22 上海大学 Automatic detection method for single-camera failure in a multi-camera monitoring system
CN109495772B (en) * 2017-09-11 2021-10-15 阿里巴巴(中国)有限公司 Video quality sequencing method and system
CN108230314B (en) * 2018-01-03 2022-01-28 天津师范大学 Image quality evaluation method based on deep activation pooling
CN108289221B (en) * 2018-01-17 2019-08-30 华中科技大学 The non-reference picture quality appraisement model and construction method of rejecting outliers
CN108401150B * 2018-03-22 2019-08-27 浙江科技学院 A kind of statistical-characteristic evaluation method for compressed sensing reconstruction algorithms simulating subjective visual perception
CN108615231B * 2018-03-22 2020-08-04 浙江科技学院 Full-reference image quality objective evaluation method based on neural network learning fusion
CN110163901A (en) * 2019-04-15 2019-08-23 福州瑞芯微电子股份有限公司 A kind of post-processing evaluation method and system
CN109840598B * 2019-04-29 2019-08-09 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and device for building a deep learning network model
CN110278415B (en) * 2019-07-02 2020-04-28 浙江大学 Method for improving video quality of network camera
CN111061895A (en) * 2019-07-12 2020-04-24 北京达佳互联信息技术有限公司 Image recommendation method and device, electronic equipment and storage medium
CN114584849B (en) * 2019-09-24 2023-05-05 腾讯科技(深圳)有限公司 Video quality evaluation method, device, electronic equipment and computer storage medium
CN110582008B (en) * 2019-09-30 2022-01-21 北京奇艺世纪科技有限公司 Video quality evaluation method and device
CN110677639B (en) * 2019-09-30 2021-06-11 中国传媒大学 Non-reference video quality evaluation method based on feature fusion and recurrent neural network
CN110730037B (en) * 2019-10-21 2021-02-26 苏州大学 Optical signal-to-noise ratio monitoring method of coherent optical communication system based on momentum gradient descent method
CN111127437A (en) * 2019-12-25 2020-05-08 浙江传媒学院 Full-reference image quality evaluation method based on color space decomposition
CN111385567B (en) * 2020-03-12 2021-01-05 上海交通大学 Ultra-high-definition video quality evaluation method and device
CN111711816B (en) * 2020-07-08 2022-11-11 福州大学 Video objective quality evaluation method based on observable coding effect intensity
CN113240249B (en) * 2021-04-26 2022-04-29 泰瑞数创科技(北京)有限公司 Urban engineering quality intelligent evaluation method and system based on unmanned aerial vehicle augmented reality
CN114240849B (en) * 2021-11-25 2023-03-31 电子科技大学 Gradient change-based no-reference JPEG image quality evaluation method
CN114445386B (en) * 2022-01-29 2023-02-24 泗阳三江橡塑有限公司 PVC pipe quality detection and evaluation method and system based on artificial intelligence
CN115097526B (en) * 2022-08-22 2022-11-11 江苏益捷思信息科技有限公司 Seismic acquisition data quality evaluation method

Also Published As

Publication number Publication date
CN101282481A (en) 2008-10-08

Similar Documents

Publication Publication Date Title
CN100559881C (en) A kind of method for evaluating video quality based on artificial neural net
CN100559880C (en) A kind of high-definition video image quality evaluation method and device based on adaptive spatio-temporal (ST) regions
CN104243973B (en) Video perceived quality non-reference objective evaluation method based on areas of interest
CN104079925B (en) Objective evaluation method for ultra-high-definition video image quality based on visual perception characteristics
Wang et al. Novel spatio-temporal structural information based video quality metric
CN101562675B (en) No-reference image quality evaluation method based on Contourlet transform
CN102523477B (en) Stereoscopic video quality evaluation method based on binocular minimum discernible distortion model
Temel et al. Perceptual image quality assessment through spectral analysis of error representations
Tian et al. A multi-order derivative feature-based quality assessment model for light field image
CN103037217B (en) Detecting image impairments in an interpolated image
CN105049838B (en) Objective evaluation method for compressed stereoscopic video quality
CN105049851A (en) Channel no-reference image quality evaluation method based on color perception
CN101482973A (en) Partial reference image quality appraisement method based on early vision
Li et al. GridSAR: Grid strength and regularity for robust evaluation of blocking artifacts in JPEG images
CN110443800A (en) The evaluation method of video image quality
Lahoulou et al. Full-reference image quality metrics performance evaluation over image quality databases
Dimauro A new image quality metric based on human visual system
CN104574424A (en) No-reference image blur degree evaluation method based on multiresolution DCT edge gradient statistics
CN105844640A (en) Color image quality evaluation method based on gradient
CN107578406A (en) No-reference stereo image quality evaluation method based on grid and Weibull statistical properties
CN108776958B (en) Image quality evaluation method and device for mixed degraded images
Fu et al. Image quality assessment using edge and contrast similarity
Guo et al. Gabor difference analysis of digital video quality
Chan et al. A psychovisually-based image quality evaluator for JPEG images
CN102496162B (en) Partial-reference image quality evaluation method based on non-separable wavelet filter

Legal Events

Code Title
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091111

Termination date: 20120509