CN104376565A - Non-reference image quality evaluation method based on discrete cosine transform and sparse representation - Google Patents
- Publication number: CN104376565A (application CN201410695579.8A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06T2207/20052 — Transform domain processing; discrete cosine transform [DCT]
- G06T2207/30168 — Subject of image; image quality inspection
Abstract
The invention discloses a no-reference image quality evaluation method based on the discrete cosine transform and sparse representation, mainly to solve the problem that no-reference image quality evaluation in the prior art is not accurate. The method comprises the following steps: a grayscale image is input, the discrete cosine transform is performed on it, and natural scene statistical features are extracted; the natural scene statistical features of a series of images of different distortion types and different content are extracted, and an original feature dictionary is established together with their mean subjective difference scores; the original feature dictionary is clustered, and atoms are selected adaptively, according to the degree of approximation between the test image features and each class of the original feature dictionary, to form a sparse representation dictionary; the sparse representation coefficients of the test image features are solved in the feature space, and a linear weighted summation over the subjective evaluation values in the sparse representation dictionary yields the image quality measure. The method has good consistency with subjective evaluation results and is suitable for quality evaluation of images with various distortion types.
Description
Technical field
The invention belongs to the field of image processing and relates to the objective evaluation of image quality; it can be used in image acquisition, compression coding, and network transmission.
Background art
Images are an important channel through which humans obtain information. Image quality represents the ability of an image to provide information to people or equipment, and directly determines the adequacy and accuracy of the information obtained. However, during acquisition, processing, transmission, and storage, images inevitably degrade under the influence of various factors, which greatly hinders information acquisition and the post-processing of images. Establishing an effective image quality evaluation mechanism is therefore extremely important: it can be used for performance comparison and parameter selection of algorithms in processing tasks such as image denoising and image fusion; in image coding and communications it can guide the transmission process and assess system performance; and it is also significant in scientific fields such as image processing algorithm optimization and biometric recognition.
Image quality evaluation can be divided into subjective evaluation methods and objective evaluation methods. Subjective methods rely on the subjective perception of experimenters to assess image quality, while objective methods measure image quality through quantitative indices provided by models that simulate the perception mechanism of the human visual system. Since people are the final recipients of images, subjective quality assessment is the most reliable evaluation approach; the most frequently used subjective methods at present are the mean opinion score (MOS) and the difference mean opinion score (DMOS). However, because their results are affected by subjective factors, and because they are time-consuming and cannot be automated when the number of images is large, research on objective image evaluation methods is particularly important.
According to the degree of dependence on a reference image during evaluation, objective image quality evaluation can be divided into full-reference image quality assessment (FR-IQA), reduced-reference image quality assessment (RR-IQA), and no-reference image quality assessment (NR-IQA).
The great advantage of full-reference (FR-IQA) methods is accurate quality prediction for distorted images. Commonly used full-reference methods include the pixel-error statistics methods MSE and PSNR, the structural-similarity method SSIM, and methods based on the human visual system (HVS). However, because these methods all require complete prior knowledge of the original image, the amount of data to be stored and transmitted is large, which limits their application in many practical fields; reduced-reference image quality evaluation has therefore become one of the focuses of research.
Reduced-reference (RR-IQA) methods do not need the complete original reference image, but combine some features of the reference image to obtain the quality score of the distorted image. Although such methods retain good accuracy while reducing the amount of information that must be transmitted, they still need to transmit partial information about the original image, and in most practical applications the information of the original image either cannot be obtained at all or is very costly to obtain.
No-reference (NR-IQA) methods evaluate the quality of a distorted image directly, without any prior information about the original image. Because current understanding of the human visual system and the corresponding cognitive processes of the brain is limited, the design and implementation of such algorithms is difficult. Existing no-reference methods include the method of Sazzad et al. for JPEG2000 distortion, "Z. M. P. Sazzad, Y. Kawayoke, and Y. Horita, No-reference image quality assessment for JPEG2000 based on spatial features, Signal Process. Image Commun., vol. 23, no. 4, pp. 257-268, Apr. 2008", but that method is only suitable for JPEG2000 compression and cannot evaluate effects on the image such as blur and noise. Moorthy et al. proposed a learning-based method, "A. K. Moorthy and A. C. Bovik, A two-step framework for constructing blind image quality indices, IEEE Signal Process. Lett., vol. 17, no. 5, pp. 513-516, May 2010", which directly models the mapping between image features and quality; its mathematical model and model parameters all need to be trained or obtained manually, and because the mapping cannot be simulated accurately, the final quality evaluation results are not accurate enough. Anush Krishna Moorthy et al. also proposed a method based on natural scene statistics, the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index, whose results are highly consistent with subjective assessment; however, it extracts an 88-dimensional feature from each image, the feature extraction process is time-consuming, and the large feature dimension is also inconvenient for classification and for training the regression model.
Summary of the invention
The object of the invention is to address the deficiencies of the above existing methods by proposing a no-reference image quality evaluation method based on the discrete cosine transform and sparse representation, which makes full use of the extracted data while reducing the dimension of the extracted image features, models the image feature information efficiently through sparse representation, and improves the accuracy of the quality evaluation results.
The technical scheme realizing the object of the invention comprises the following steps:
(1) Read in a grayscale image I, perform the discrete cosine transform on it, and extract a series of natural scene statistical features f associated with subjective perception;
(2) Establish the original feature dictionary of the training images:
(2a) Repeat step (1) to extract the natural scene statistical features of the n training images and form the feature matrix F = [f_1, f_2, ..., f_i, ..., f_n], where f_i is the natural scene statistical feature of the i-th training image, i = 1, 2, ..., n;
(2b) Collect the mean subjective difference scores of the n training images and form the quality vector M = [m_1, m_2, ..., m_i, ..., m_n], where m_i is the mean subjective difference score of the i-th training image, i = 1, 2, ..., n;
(2c) Combine the quality vector M with the feature matrix F correspondingly to build the original feature dictionary D;
(3) Cluster the original feature dictionary D into H classes with the K-means algorithm; the cluster center of the k-th class is C_k, where Fc_k is the feature cluster center and Mc_k is the quality cluster center, k = 1, 2, ..., H;
(4) Read in a test image I', repeat step (1), and extract the natural scene statistical feature f' of the test image I';
(5) Calculate the Euclidean distance dis_k between the natural scene statistical feature f' of the test image I' and the feature cluster center Fc_k of the k-th class in the original feature dictionary D, and use P_k to represent the degree of approximation between the test image and the k-th class of D;
(6) According to the degree of approximation P_k, select from the k-th class of the original feature dictionary D the N_k samples with the smallest Euclidean distance to the feature cluster center Fc_k as sparse representation atoms f_i^k of the test image I', where i = 1, 2, ..., N_k, k = 1, 2, ..., H; the H classes of atoms jointly form the sparse representation dictionary D' for the test image I';
(7) Solve the sparse representation coefficients of the natural scene statistical feature f' of the test image I' over the feature matrix F' of the sparse representation dictionary D' by the sparse representation method:

α* = argmin_α { λ||α||_1 + ||f' - D'α||_2 },

where argmin assigns to α* the coefficient vector α = [α_1, α_2, ..., α_k, ..., α_H]^T that minimizes the objective function λ||α||_1 + ||f' - D'α||_2, each α_k = [α_{k,1}, α_{k,2}, ..., α_{k,N_k}] collecting the coefficients of the k-th class, k = 1, 2, ..., H, i = 1, 2, ..., N_k; R^L denotes the L-dimensional real number space; for a vector β = [β_1, β_2, ..., β_l, ..., β_L]^T, ||β||_1 = |β_1| + |β_2| + ... + |β_L| denotes its 1-norm and ||β||_2 = (β_1² + β_2² + ... + β_L²)^(1/2) denotes its 2-norm, l = 1, 2, ..., L; λ is a positive constant used to balance the fidelity term and the regularization term; and T denotes matrix transposition;
(8) Calculate the quality of the test image I' according to the constructed sparse representation: Q ∈ [0, 100], where m_i^k is the quality score of the i-th atom f_i^k of the k-th class in the sparse representation dictionary D', α_{k,i} is the representation coefficient of the natural scene statistical feature f' of the test image I' over the i-th atom f_i^k of the k-th class in the feature matrix F', and Q is the final quality measure of the test image I'.
Compared with the prior art, the present invention has the following advantages:
1. The present invention can make full use of the information in the database by training a dictionary on the natural scene statistical features of images with different distortion types, and can therefore evaluate the quality of images with different distortion types. Compared with most existing no-reference algorithms, which target only a specific distortion type, its scope of application is wider.
2. The present invention uses the trained feature dictionary to evaluate image quality directly and efficiently, without any original reference image information; compared with full-reference and reduced-reference image quality evaluation methods it is more convenient and its range of application is wider.
3. For a specific test image, the present invention adaptively selects a corresponding sparse representation dictionary from the trained image feature dictionary; compared with other existing algorithms, its evaluation results are more consistent with subjective assessment.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the present invention;
Fig. 2 shows the natural scene statistical features extracted after performing the discrete cosine transform on the grayscale image I in the present invention;
Fig. 3 shows the result of K-means clustering of the original feature dictionary D in the present invention;
Fig. 4 compares the objective evaluation scores of test images obtained by the present invention with their mean subjective difference scores.
Embodiment
The specific implementation steps and effects of the invention are described in further detail below in conjunction with the accompanying drawings.
With reference to Fig. 1, the implementation steps of the present invention are as follows:
Step 1: extract the natural scene statistical feature f_{s,1} of the grayscale image I on the first scale.
(1a) Read in a grayscale image I, decompose it into mutually overlapping 5×5 image blocks, perform the discrete cosine transform on each image block, and remove the DC component of the discrete cosine transform coefficients;
(1b) Fit a generalized Gaussian distribution model to the DC-removed discrete cosine transform coefficients of each image block to obtain the shape factor γ of each block. Take the mean of the shape factors γ of all blocks as the first element f_{s,1,1} of the first-scale natural scene statistical feature f_{s,1}; sort the shape factors of all blocks in ascending order and take the mean of the first 10% as the second element f_{s,1,2}. In the generalized Gaussian distribution model function used to compute the shape factor γ, μ is the mean, σ is the variance, γ is the shape factor, and β is the scale factor;
(1c) Calculate the frequency variation coefficient of each image block, i.e. the ratio σ_|X|/μ_|X|, where μ_|X| and σ_|X| are respectively the mean and the standard deviation of the absolute values of the DC-removed discrete cosine transform coefficients of the block, and |·| denotes the absolute value. Take the mean of the frequency variation coefficients of all image blocks as the third element f_{s,1,3} of the first-scale feature f_{s,1}; sort the coefficients of all blocks in descending order and take the mean of the first 10% as the fourth element f_{s,1,4};
(1d) Divide the discrete cosine transform coefficients of each image block into 3 frequency bands in order from high frequency to low frequency, and calculate the band energy change rate R_n of each image block, where σ_n² denotes the variance of the coefficients of the n-th band, E_n denotes the energy of the n-th band, n = 1, 2, 3, and j is a positive integer indexing the bands;
(1e) Take the mean of the band energy change rates R_n of all image blocks as the fifth element f_{s,1,5} of the first-scale feature f_{s,1}; sort the R_n of all blocks in descending order and take the mean of the first 10% as the sixth element f_{s,1,6};
(1f) Divide the discrete cosine transform coefficients of each image block into subbands along the three orientations 45°, 90°, and 135°, fit the generalized Gaussian distribution function to the subband of each orientation to obtain the frequency variation coefficients of the 3 orientation subbands, and calculate the variance of these 3 frequency variation coefficients;
(1g) Take the mean, over all image blocks, of the variance of the frequency variation coefficients of the 3 orientation subbands as the seventh element f_{s,1,7} of the first-scale feature f_{s,1}; sort these variances of all blocks in descending order and take the mean of the first 10% as the eighth element f_{s,1,8};
(1h) Obtain the 8-dimensional natural scene statistical feature f_{s,1} of the grayscale image I on the first scale: f_{s,1} = [f_{s,1,1}, f_{s,1,2}, ..., f_{s,1,i}, ..., f_{s,1,8}], i = 1, 2, ..., 8.
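The generalized Gaussian fit of step (1b) can be sketched as follows. The patent gives its model formula only as a figure; the sketch below assumes the standard zero-mean parameterization p(x) ∝ exp(-(|x|/β)^γ) and estimates the shape factor γ by the common moment-ratio method (matching E[|x|]²/E[x²] against its theoretical value), which is an assumption rather than the patent's stated fitting procedure:

```python
import math
import random

def ggd_ratio(gamma):
    """Theoretical ratio E[|x|]^2 / E[x^2] for a zero-mean generalized
    Gaussian with shape factor gamma; this ratio increases with gamma."""
    return math.gamma(2.0 / gamma) ** 2 / (math.gamma(1.0 / gamma) * math.gamma(3.0 / gamma))

def fit_ggd_shape(coeffs):
    """Estimate the shape factor gamma of DC-removed DCT coefficients by
    matching the sample ratio mean(|x|)^2 / mean(x^2) to the theoretical
    ratio, via bisection on gamma in (0.1, 10)."""
    m_abs = sum(abs(c) for c in coeffs) / len(coeffs)
    m_sq = sum(c * c for c in coeffs) / len(coeffs)
    target = m_abs * m_abs / m_sq
    lo, hi = 0.1, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: Gaussian data corresponds to shape factor gamma = 2.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(20000)]
gamma_hat = fit_ggd_shape(samples)
```

For a Laplacian coefficient distribution the same estimator returns a value near 1, which is why the shape factor is sensitive to the heavier tails that distortion introduces into the DCT coefficients.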
Step 2: extract the natural scene statistical feature f of the grayscale image I on multiple scales.
(2a) Down-sample the grayscale image I to obtain the down-sampled image I2, and repeat the operations of step 1 on I2 to obtain the natural scene statistical feature f_{s,2} of image I on the second scale;
(2b) Down-sample I2 again to obtain the twice down-sampled image I3, and repeat the operations of step 1 on I3 to obtain the natural scene statistical feature f_{s,3} of image I on the third scale;
(2c) Obtain the natural scene statistical feature of the grayscale image I: f = [f_{s,1}, f_{s,2}, f_{s,3}]^T, where T denotes matrix transposition.
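The multi-scale concatenation of step 2 can be sketched as below. The patent does not specify the down-sampling filter, so the 2×2 block averaging and the stand-in per-scale extractor are assumptions used purely for illustration; a real extractor would run steps (1a)-(1h):

```python
def downsample2(img):
    """Halve an image (a list of rows of pixel values) by averaging
    non-overlapping 2x2 blocks. The patent does not name its filter;
    2x2 averaging is an assumption for this sketch."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def multiscale_features(img, extract_scale_feature):
    """Concatenate the 8-D first-scale feature with the features of the
    down-sampled images I2 and I3, giving the 24-D feature f."""
    f = []
    for _ in range(3):
        f.extend(extract_scale_feature(img))  # 8 elements per scale
        img = downsample2(img)
    return f

# Toy check with a stand-in extractor that just records the image size.
toy = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
dummy_extract = lambda im: [len(im), len(im[0])] + [0.0] * 6
f = multiscale_features(toy, dummy_extract)
```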
Step 3: extract the natural scene statistical feature f' of the test image I' according to steps 1 and 2.
Step 4: build the original feature dictionary D.
(4a) Perform the operations of steps 1 and 2 on every image in the training set in turn, extract the natural scene statistical features of the n training images, and form the feature matrix F = [f_1, f_2, ..., f_i, ..., f_n], where f_i is the natural scene statistical feature of the i-th training image, i = 1, 2, ..., n;
(4b) Collect the mean subjective difference scores of the n training images and form the quality vector M = [m_1, m_2, ..., m_i, ..., m_n], where m_i is the mean subjective difference score of the i-th training image, i = 1, 2, ..., n;
(4c) Combine the quality vector M with the feature matrix F correspondingly to build the original feature dictionary D, each entry of which pairs a training feature vector with its quality score;
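Building the original feature dictionary of step 4 amounts to pairing each training feature f_i with its score m_i. A minimal sketch (the list-of-pairs layout is a convenience standing in for the column-wise combination of F and M shown in the patent's figure):

```python
def build_dictionary(features, dmos):
    """Pair each training feature vector f_i with its mean subjective
    difference score m_i, giving the original feature dictionary D as a
    list of (f_i, m_i) atoms."""
    if len(features) != len(dmos):
        raise ValueError("need one DMOS score per training image")
    return list(zip(features, dmos))

# n = 3 toy training images with 4-D features and DMOS scores.
F = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], [0.9, 1.0, 1.1, 1.2]]
M = [12.5, 40.0, 77.3]
D = build_dictionary(F, M)
```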
Step 5: select the corresponding sparse representation dictionary for a specific test image from the image feature dictionary.
(5a) Cluster the original feature dictionary D into H classes with the K-means algorithm; among the H classes, the cluster center of the k-th class is C_k, where Fc_k is the feature cluster center and Mc_k is the quality cluster center, k = 1, 2, ..., H;
(5b) Read in a test image I', perform the operations of steps 1 and 2 in turn, and extract the natural scene statistical feature f' of the test image I';
(5c) Calculate the Euclidean distance dis_k between the natural scene statistical feature f' of the test image I' and the feature cluster center Fc_k of the k-th class in the original feature dictionary D, and obtain the degree of approximation P_k between the test image I' and the k-th class of D;
(5d) According to the degree of approximation P_k, select from the k-th class of the original feature dictionary D the N_k samples with the smallest Euclidean distance to the feature cluster center Fc_k as sparse representation atoms f_i^k of the test image I', where i = 1, 2, ..., N_k, k = 1, 2, ..., H; the H classes of atoms jointly form the sparse representation dictionary D' for the test image I';
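Step 5 can be sketched as below. The patent gives the formula for the degree of approximation P_k only in a figure; the normalized inverse-distance weighting used here, and the rounding of N_k = δ·P_k to a whole number of atoms, are therefore assumptions:

```python
import math

def euclid(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_sparse_dictionary(classes, centers, f_test, delta):
    """Adaptively pick the sparse-representation atoms for one test feature.

    classes[k] : list of (feature, dmos) atoms of class k (from K-means on D)
    centers[k] : feature cluster center Fc_k of class k
    delta      : the positive constant with N_k = delta * P_k

    P_k is taken as normalized inverse distance to Fc_k (an assumption;
    the patent defines P_k only through an omitted figure)."""
    dis = [euclid(f_test, c) for c in centers]
    inv = [1.0 / max(d, 1e-12) for d in dis]
    P = [v / sum(inv) for v in inv]               # degrees of approximation
    dictionary = []
    for k, cls in enumerate(classes):
        n_k = max(1, round(delta * P[k]))         # N_k = delta * P_k, rounded
        # the N_k atoms of class k nearest to its own center Fc_k
        nearest = sorted(cls, key=lambda atom: euclid(atom[0], centers[k]))[:n_k]
        dictionary.extend(nearest)
    return P, dictionary

# Toy setup: H = 2 classes of 1-D features with DMOS scores.
classes = [[([0.0], 10.0), ([0.2], 12.0), ([0.4], 15.0)],
           [([2.0], 60.0), ([2.2], 64.0)]]
centers = [[0.2], [2.1]]
P, D_sparse = select_sparse_dictionary(classes, centers, [0.3], delta=4)
```

Because the test feature [0.3] lies near the first cluster, most of the sparse dictionary is drawn from that class, which is the adaptive behavior step 5 describes.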
Step 6: sparsely represent the feature of the test image.
Solve the sparse representation coefficients of the natural scene statistical feature f' of the test image I' over the feature matrix F' of the sparse representation dictionary D' by the sparse representation method:

α* = argmin_α { λ||α||_1 + ||f' - D'α||_2 },

where argmin assigns to α* the coefficient vector α = [α_1, α_2, ..., α_k, ..., α_H]^T that minimizes the objective function λ||α||_1 + ||f' - D'α||_2, each α_k = [α_{k,1}, α_{k,2}, ..., α_{k,N_k}] collecting the coefficients of the k-th class, k = 1, 2, ..., H, i = 1, 2, ..., N_k; R^L denotes the L-dimensional real number space; for a vector β = [β_1, β_2, ..., β_l, ..., β_L]^T, ||β||_1 = |β_1| + |β_2| + ... + |β_L| denotes its 1-norm and ||β||_2 = (β_1² + β_2² + ... + β_L²)^(1/2) denotes its 2-norm, l = 1, 2, ..., L; λ is a positive constant used to balance the fidelity term and the regularization term; and T denotes matrix transposition.
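Step 6 is an l1-regularized fitting problem. The sketch below solves the closely related objective λ||α||_1 + ||f' - D'α||_2² (squared fidelity, as most packaged solvers use) with plain iterative shrinkage-thresholding (ISTA); treating this as a stand-in for the patent's unsquared objective is an assumption:

```python
def ista(Dp, f, lam=1e-4, iters=2000):
    """Iterative shrinkage-thresholding for
         min_a  lam * ||a||_1 + ||f - Dp a||_2^2
    Dp: list of atom columns, f: target feature vector.
    A plain-Python sketch; a production system would use a packaged
    l1 solver rather than this loop."""
    L, K = len(f), len(Dp)
    a = [0.0] * K
    # step size from a crude Lipschitz bound 2 * ||Dp||_F^2
    lip = 2.0 * sum(x * x for col in Dp for x in col) or 1.0
    t = 1.0 / lip
    for _ in range(iters):
        # residual r = f - Dp a
        r = [f[l] - sum(Dp[k][l] * a[k] for k in range(K)) for l in range(L)]
        # gradient of the squared fidelity term w.r.t. a_k is -2 <Dp_k, r>
        g = [-2.0 * sum(Dp[k][l] * r[l] for l in range(L)) for k in range(K)]
        z = [a[k] - t * g[k] for k in range(K)]
        # soft-thresholding: the proximal step of lam * ||.||_1
        thr = t * lam
        a = [max(abs(v) - thr, 0.0) * (1 if v > 0 else -1) for v in z]
    return a

# Orthonormal toy dictionary: with a tiny lam, the coefficient of the
# matching atom approaches 0.8 and the other coefficient stays zero.
Dp = [[1.0, 0.0], [0.0, 1.0]]
f = [0.8, 0.0]
alpha = ista(Dp, f, lam=1e-4, iters=2000)
```

With λ = 0.0001, the value the experiments use, the shrinkage is negligible and the coefficients essentially reconstruct f'; larger λ drives more coefficients exactly to zero, which is what makes the representation sparse.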
Step 7: calculate the quality assessment measure of the test image I' using the following formula: Q ∈ [0, 100], where m_i^k is the quality score of the i-th atom f_i^k of the k-th class in the sparse representation dictionary D', α_{k,i} is the representation coefficient of the natural scene statistical feature f' of the test image I' over the i-th atom f_i^k of the k-th class in the feature matrix F', and Q is the final quality measure of the test image I'; a smaller Q indicates a higher quality of the test image.
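The quality measure of step 7 appears in the patent only as a figure; reading the abstract's "linear weighted summation" of the subjective scores in the sparse representation dictionary as a coefficient-weighted average is an assumption, under which the computation is:

```python
def quality_measure(alpha, dmos_atoms):
    """Q = sum_{k,i} alpha_{k,i} * m_i^k  /  sum_{k,i} alpha_{k,i}
    A coefficient-weighted average of the atoms' DMOS scores (assumed
    reading of the patent's figure-borne formula). With non-negative
    coefficients Q stays in [0, 100], since each DMOS is in [0, 100]."""
    s = sum(alpha)
    if s == 0:
        raise ValueError("all sparse coefficients are zero")
    return sum(a * m for a, m in zip(alpha, dmos_atoms)) / s

# Two active atoms with DMOS 20 and 60, weighted 3:1; the third atom
# has a zero coefficient and does not contribute.
Q = quality_measure([0.3, 0.1, 0.0], [20.0, 60.0, 90.0])
```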
The effect of the present invention can be further illustrated by the following experiments.
1. Experimental conditions and grading standards:
This test is carried out on the second-generation LIVE image quality assessment database of the University of Texas, USA. The database contains 29 high-resolution undistorted RGB color images and corresponding distorted images of five types: 175 JPEG images, 169 JPEG2000 images, 145 white-noise (WN) images, 145 Gaussian-blur (Gblur) images, and 145 images distorted by a fast-fading (FF) channel. The database provides the mean subjective difference score of each distorted image to describe its quality.
To test the consistency between the objective evaluation results of the proposed method and subjective perception, three metrics are selected in this experiment: first, the linear correlation coefficient (LCC), reflecting the prediction accuracy of the objective method; second, the root-mean-square error (RMSE), reflecting the prediction error of the objective method; and third, the Spearman rank-order correlation coefficient (SROCC), reflecting the monotonicity of the objective prediction.
In the test, the image database is divided into a training image set and a test image set: the training set is used to train the original feature dictionary, and the test set is used to predict evaluation results. Two grouping schemes are adopted according to the number of original images. First group: the training set contains 15 original images; second group: the training set contains 23 original images.
Experimental parameter settings: 1) H = 4 when clustering the original feature dictionary; 2) δ = 4 when forming the sparse representation dictionary; 3) λ = 0.0001, chosen empirically, when solving the sparse representation coefficients.
2. Experimental content and results:
Experiment 1: evaluation of image distortion on the LIVE_database2 database.
Using the proposed no-reference image quality evaluation method based on the discrete cosine transform and sparse representation, quality evaluation is carried out on the test images of the LIVE_database2 database under the two grouping schemes, and the three metrics of consistency between the objective quality assessment results and subjective perception are computed: the linear correlation coefficient LCC, the Spearman rank-order correlation coefficient SROCC, and the root-mean-square error RMSE.
Table 1 gives the distortion evaluation results on the LIVE_database2 database, where DCTSR1 and DCTSR2 are the test results of the present invention under the first and second grouping schemes respectively, and "wavelet transform + sparse representation 1" and "wavelet transform + sparse representation 2" are the test results of the method combining wavelet transform with sparse representation under the first and second grouping schemes respectively.
Table 1: comparative test results of this method and other image quality evaluation methods
As can be seen from Table 1, under the same grouping scheme the present invention shows good superiority over most existing methods: 1) higher prediction accuracy, i.e. a larger linear correlation coefficient LCC than most existing methods; 2) stricter prediction monotonicity, i.e. a larger rank-order correlation coefficient SROCC than most existing methods; 3) good prediction for images of various distortion types.
Experiment 2: consistency test between objective evaluation results and subjective perception.
Using the quality assessment measures obtained in Experiment 1, scatter plots of the quality measure Q against the mean subjective difference score DMOS of the test images are drawn, and a logarithmic function is fitted to obtain the best matching curve, as shown in Fig. 4. In Fig. 4 the abscissa is the image quality measure and the ordinate is the DMOS; each point represents the quality measure of one image obtained by the present invention, and the closer the points lie to the fitted best matching curve, the better the effect. Specifically:
4(a): the effect of the present invention on the JPEG2000 compression distortion subset;
4(b): the effect of the present invention on the JPEG compression distortion subset;
4(c): the effect of the present invention on the white-noise (WN) distortion subset;
4(d): the effect of the present invention on the Gaussian-blur (Gblur) distortion subset;
4(e): the effect of the present invention on the subset of images distorted by a fast-fading (FF) channel;
4(f): the effect of the present invention on the whole LIVE_database2 database.
As can be seen from Fig. 4, the points produced by the present invention lie close to the fitted best matching curve with little deviation, the performance is stable across the different distortion subsets of the whole database, and the consistency with subjective visual perception is high.
Claims (2)
1. A no-reference image quality evaluation method based on the discrete cosine transform and sparse representation, comprising the steps of:
(1) read in a grayscale image I, perform the discrete cosine transform on it, and extract a series of natural scene statistical features f associated with subjective perception;
(2) establish the original feature dictionary of the training images:
(2a) repeat step (1) to extract the natural scene statistical features of the n training images and form the feature matrix F = [f_1, f_2, ..., f_i, ..., f_n], where f_i is the natural scene statistical feature of the i-th training image, i = 1, 2, ..., n;
(2b) collect the mean subjective difference scores of the n training images and form the quality vector M = [m_1, m_2, ..., m_i, ..., m_n], where m_i is the mean subjective difference score of the i-th training image, i = 1, 2, ..., n;
(2c) combine the quality vector M with the feature matrix F correspondingly to build the original feature dictionary D;
(2d) cluster the original feature dictionary D into H classes with the K-means algorithm; among the H classes, the cluster center of the k-th class is C_k, where Fc_k is the feature cluster center and Mc_k is the quality cluster center, k = 1, 2, ..., H;
(3) read in a test image I', repeat step (1), and extract the natural scene statistical feature f' of the test image I';
(4) calculate the Euclidean distance dis_k between the natural scene statistical feature f' of the test image I' and the feature cluster center Fc_k of the k-th class in the original feature dictionary D, and obtain the degree of approximation P_k between the test image I' and the k-th class of D, where k = 1, 2, ..., H;
(5) according to the similarity P_k, selecting from the k-th class of the primitive feature dictionary D the N_k samples with the smallest Euclidean distance to the feature cluster center Fc_k as sparse representation atoms for the test image I', i = 1, 2, ..., N_k, k = 1, 2, ..., H; the atoms of the H classes jointly form the sparse representation dictionary D' for the test image I', where f'_{k,i} denotes the natural scene statistical feature of the i-th sample selected from the k-th class of the primitive feature dictionary D, m'_{k,i} denotes the mean subjective difference score of the i-th sample of the k-th class, i = 1, 2, ..., N_k, N_k = δ*P_k, k = 1, 2, ..., H, and δ is a positive constant;
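Steps (4)-(5) might look like the sketch below. The exact formula for P_k is not reproduced in the text above, so the inverse-distance normalization is an assumption, as is rounding N_k up to at least one atom per class:

```python
import numpy as np

def select_atoms(f_test, Fc, features, scores, labels, delta=5.0):
    """Pick per class k the N_k samples of that class closest to the feature
    cluster center Fc_k, with N_k = delta * P_k (assumed inverse-distance P_k)."""
    f_test = np.asarray(f_test, dtype=float)
    dis = np.linalg.norm(Fc - f_test, axis=1)   # dis_k for each class center
    inv = 1.0 / (dis + 1e-12)
    P = inv / inv.sum()                         # assumed normalization of P_k
    atom_f, atom_m = [], []
    for k in range(len(Fc)):
        members = np.flatnonzero(labels == k)
        if members.size == 0:
            continue
        n_k = min(members.size, max(1, int(np.ceil(delta * P[k]))))
        # Order the class members by distance to their own center Fc_k.
        order = np.argsort(np.linalg.norm(features[members] - Fc[k], axis=1))
        chosen = members[order[:n_k]]
        atom_f.append(features[chosen])
        atom_m.append(scores[chosen])
    return np.vstack(atom_f), np.concatenate(atom_m)
```

Classes that resemble the test image thus contribute more atoms to D', which keeps the dictionary small and adapted to the input.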
(6) solving, by the method of sparse representation, the sparse representation coefficient of the natural scene statistical feature f' of the test image I' over the feature matrix F':
α* = argmin_α { λ||α||_1 + ||f' − F'α||_2 }, α = [α_1, α_2, ..., α_k, ..., α_H]^T,
where argmin assigns to α* the α that minimises the objective function λ||α||_1 + ||f' − F'α||_2, k = 1, 2, ..., H, i = 1, 2, ..., N_k, R^L denotes the L-dimensional real space, ||β||_1 denotes the 1-norm of a vector β = [β_1, β_2, ..., β_l, ..., β_L]^T, ||β||_2 denotes the 2-norm of β, λ is a positive constant balancing the fidelity term and the regularization term, and T denotes matrix transposition;
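The l1-regularized fit of step (6) can be sketched with a plain ISTA solver. Note the data term here is the common squared form 0.5*||f' − F'α||_2^2 rather than the unsquared norm in the claim, a solver-convenience assumption:

```python
import numpy as np

def sparse_code(f, F, lam=0.1, n_iter=500):
    """ISTA for min_a lam*||a||_1 + 0.5*||f - F a||_2^2.

    F: (d, L) matrix whose columns are atom features; f: (d,) test feature.
    """
    f = np.asarray(f, dtype=float)
    F = np.asarray(F, dtype=float)
    a = np.zeros(F.shape[1])
    step = 1.0 / (np.linalg.norm(F, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = F.T @ (F @ a - f)                       # gradient of the data term
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft-threshold
    return a
```

With an orthonormal F' the solution reduces to closed-form soft-thresholding of F'^T f', which makes the behavior easy to sanity-check.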
(7) calculating the quality of the test image I' according to the constructed sparse representation, obtaining Q ∈ [0, 100], where m'_{k,i} is the quality score of the i-th atom of the k-th class in the sparse representation dictionary D', α_{k,i} is the representation coefficient of the natural scene statistical feature f' of the test image I' on the i-th atom of the k-th class of the feature matrix F', and Q is the final quality estimate of the test image I'.
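Step (7)'s pooling formula is not reproduced in the text above. A common choice, used purely as an assumption here, is the coefficient-weighted average of the atom quality scores, clipped to [0, 100]:

```python
import numpy as np

def pool_quality(alpha, atom_scores):
    """Assumed pooling: average of atom scores weighted by |alpha|, in [0, 100]."""
    w = np.abs(np.asarray(alpha, dtype=float))
    atom_scores = np.asarray(atom_scores, dtype=float)
    if w.sum() == 0.0:
        # Degenerate all-zero code: fall back to the plain mean of atom scores.
        return float(np.clip(atom_scores.mean(), 0.0, 100.0))
    return float(np.clip(np.dot(w, atom_scores) / w.sum(), 0.0, 100.0))
```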
2. The non-reference image quality evaluation method based on discrete cosine transform and sparse representation according to claim 1, wherein performing the discrete cosine transform on the grayscale image I and extracting the series of natural scene statistical features f associated with subjective perception in step (1) comprises the following steps:
(1.1) extracting the natural scene statistical feature f_{s,1} of the grayscale image I at the first scale:
(1.1a) decomposing the grayscale image I into mutually overlapping 5*5 image blocks, performing a discrete cosine transform on each image block, and removing the DC component of the discrete cosine transform coefficients;
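Step (1.1a) can be sketched as below. The 4-pixel step (1-pixel overlap) is an assumption, since the claim only says the blocks overlap:

```python
import numpy as np

def dct2(block):
    """2-D orthonormal DCT-II of a square block, via the transform matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C @ block @ C.T

def block_ac_coefficients(img, size=5, step=4):
    """DCT each overlapping size*size block and return its coefficients
    with the DC component (position [0, 0]) removed."""
    h, w = img.shape
    out = []
    for i in range(0, h - size + 1, step):
        for j in range(0, w - size + 1, step):
            d = dct2(img[i:i + size, j:j + size].astype(float))
            out.append(np.delete(d.ravel(), 0))   # drop the DC coefficient
    return out
```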
(1.1b) fitting a generalized Gaussian distribution model to the DC-removed discrete cosine transform coefficients of each image block to obtain the shape parameter γ of each block; taking the mean of the shape parameters γ of all blocks as the first element f_{s,1,1} of the first-scale natural scene statistical feature f_{s,1}; sorting the shape parameters of all blocks in ascending order and taking the mean of the first 10% as the second element f_{s,1,2}; the generalized Gaussian distribution model from which the shape parameter γ is computed is
f(x | γ, β, μ) = (γ / (2βΓ(1/γ))) exp(−(|x − μ| / β)^γ),
where μ is the mean, σ is the variance, γ is the shape parameter, and β is the scale parameter;
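The shape parameter γ of step (1.1b) can be estimated by moment matching, since the ratio E[x^2]/E[|x|]^2 has a closed form in γ. The claim does not specify the fitting procedure, so this standard estimator is an assumption:

```python
import numpy as np
from math import gamma as gamma_fn

def ggd_shape(coeffs):
    """Moment-matching estimate of the generalized Gaussian shape parameter."""
    x = np.asarray(coeffs, dtype=float)
    r = np.mean(x ** 2) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    grid = np.arange(0.2, 6.0, 0.001)
    # rho(g) = Gamma(1/g) * Gamma(3/g) / Gamma(2/g)^2 equals E[x^2]/E[|x|]^2.
    rho = np.array([gamma_fn(1 / g) * gamma_fn(3 / g) / gamma_fn(2 / g) ** 2
                    for g in grid])
    return float(grid[int(np.argmin(np.abs(rho - r)))])
```

Gaussian data gives γ near 2 and Laplacian data γ near 1, which is why γ drops as compression or noise makes the DCT coefficients more heavy-tailed.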
(1.1c) calculating the frequency variation coefficient ζ = σ_{|x|} / μ_{|x|} of each image block, where μ_{|x|} and σ_{|x|} are respectively the mean and the standard deviation of the magnitudes of the DC-removed discrete cosine transform coefficients of the block and |·| denotes the absolute value; taking the mean of the frequency variation coefficients of all blocks as the third element f_{s,1,3} of the first-scale natural scene statistical feature f_{s,1}; sorting the frequency variation coefficients of all blocks in descending order and taking the mean of the first 10% as the fourth element f_{s,1,4};
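The frequency variation coefficient of step (1.1c) is simply the coefficient of variation of the AC coefficient magnitudes; a minimal sketch:

```python
import numpy as np

def frequency_variation(ac_coeffs):
    """zeta = std(|x|) / mean(|x|) over a block's AC DCT coefficients."""
    ax = np.abs(np.asarray(ac_coeffs, dtype=float))
    return float(ax.std() / (ax.mean() + 1e-12))
```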
(1.1d) dividing the discrete cosine transform coefficients of each image block into 3 frequency bands, ordered from high frequency to low frequency, and calculating the band-energy change rate R_n of each image block from the energies of these bands;
(1.1e) taking the mean of the band-energy change rates R_n of all blocks as the fifth element f_{s,1,5} of the first-scale natural scene statistical feature f_{s,1}; sorting the band-energy change rates R_n of all blocks in descending order and taking the mean of the first 10% as the sixth element f_{s,1,6};
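The claim's formula for the band-energy change rate R_n is not reproduced in the text above; the sketch below uses the relative-change statistic from the BLIINDS-II family of features, which this feature set closely resembles, as an assumption:

```python
import numpy as np

def band_energy_change_rates(bands):
    """Assumed form: R_n = |E_n - mean(E_1..E_{n-1})| / (E_n + mean(E_1..E_{n-1})),
    with E_n the average squared coefficient magnitude of band n."""
    E = [float(np.mean(np.square(np.asarray(b, dtype=float)))) for b in bands]
    rates = []
    for n in range(1, len(E)):
        lower = sum(E[:n]) / n          # mean energy of the lower bands
        rates.append(abs(E[n] - lower) / (E[n] + lower + 1e-12))
    return rates
```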
(1.1f) dividing the discrete cosine transform coefficients of each image block into subbands along the 3 directions of 45°, 90° and 135°, fitting a generalized Gaussian distribution function to the subband of each direction to obtain the frequency variation coefficients of the 3 directional subbands, and calculating the variance of the frequency variation coefficients of these 3 directional subbands;
(1.1g) taking the mean over all blocks of the variance of the frequency variation coefficients of the 3 directional subbands as the seventh element f_{s,1,7} of the first-scale natural scene statistical feature f_{s,1}; sorting these variances in descending order and taking the mean of the first 10% as the eighth element f_{s,1,8};
(1.1h) obtaining the 8-dimensional natural scene statistical feature f_{s,1} of the grayscale image I at the first scale:
f_{s,1} = [f_{s,1,1}, f_{s,1,2}, ..., f_{s,1,i}, ..., f_{s,1,8}], i = 1, 2, ..., 8;
(1.2) down-sampling the grayscale image I to obtain the down-sampled image I2, and repeating the operations of step (1.1) on I2 to obtain the natural scene statistical feature f_{s,2} of the grayscale image I at the second scale;
(1.3) down-sampling the image I2 again to obtain the twice down-sampled image I3, and repeating the operations of step (1.1) on I3 to obtain the natural scene statistical feature f_{s,3} of the grayscale image I at the third scale;
(1.4) obtaining the natural scene statistical feature of the grayscale image I as f = [f_{s,1}, f_{s,2}, f_{s,3}]^T, where T denotes matrix transposition.
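Steps (1.2)-(1.4) stack the same 8 features over three scales. A sketch with 2x2 average-pool down-sampling (the claim does not name the down-sampling filter, so average pooling is an assumption):

```python
import numpy as np

def downsample(img):
    """2x2 average-pool down-sampling (assumed filter)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    x = np.asarray(img, dtype=float)[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def multiscale_features(img, extract, n_scales=3):
    """f = [f_s1, f_s2, f_s3]: apply a per-scale extractor, down-sampling twice."""
    feats = []
    for _ in range(n_scales):
        feats.append(np.asarray(extract(img), dtype=float))
        img = downsample(img)
    return np.concatenate(feats)
```

Here `extract` stands in for the 8-feature computation of step (1.1); any callable returning a fixed-length vector per scale fits.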
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410695579.8A CN104376565B (en) | 2014-11-26 | 2014-11-26 | Based on discrete cosine transform and the non-reference picture quality appraisement method of rarefaction representation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104376565A true CN104376565A (en) | 2015-02-25 |
CN104376565B CN104376565B (en) | 2017-03-29 |
Family
ID=52555455
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410695579.8A Expired - Fee Related CN104376565B (en) | 2014-11-26 | 2014-11-26 | Based on discrete cosine transform and the non-reference picture quality appraisement method of rarefaction representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104376565B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102122353A (en) * | 2011-03-11 | 2011-07-13 | 西安电子科技大学 | Method for segmenting images by using increment dictionary learning and sparse representation |
CN102722712A (en) * | 2012-01-02 | 2012-10-10 | 西安电子科技大学 | Multiple-scale high-resolution image object detection method based on continuity |
US20140072209A1 (en) * | 2012-09-13 | 2014-03-13 | Los Alamos National Security, Llc | Image fusion using sparse overcomplete feature dictionaries |
CN102945552A (en) * | 2012-10-22 | 2013-02-27 | 西安电子科技大学 | No-reference image quality evaluation method based on sparse representation in natural scene statistics |
Non-Patent Citations (5)
Title |
---|
ANISH MITTAL et al.: "No-Reference Image Quality Assessment in the Spatial Domain", IEEE TRANSACTIONS ON IMAGE PROCESSING *
MICHAL AHARON et al.: "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation", IEEE TRANSACTIONS ON SIGNAL PROCESSING *
MICHELE A. SAAD et al.: "Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain", IEEE TRANSACTIONS ON IMAGE PROCESSING *
XU JIAN et al.: "Clustering-based adaptive sparse representation algorithm for images and its application", Acta Photonica Sinica *
GAO FEI et al.: "Active feature learning and its application in blind image quality evaluation", Chinese Journal of Computers *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016146038A1 (en) * | 2015-03-13 | 2016-09-22 | Shenzhen University | System and method for blind image quality assessment |
US10909409B2 (en) | 2015-03-13 | 2021-02-02 | Shenzhen University | System and method for blind image quality assessment |
US10331971B2 (en) | 2015-03-13 | 2019-06-25 | Shenzhen University | System and method for blind image quality assessment |
CN105005990B (en) * | 2015-07-02 | 2017-11-28 | 东南大学 | A kind of blind image quality evaluating method based on distinctiveness rarefaction representation |
CN105005990A (en) * | 2015-07-02 | 2015-10-28 | 东南大学 | Blind image quality evaluation method based on distinctive sparse representation |
CN105007488A (en) * | 2015-07-06 | 2015-10-28 | 浙江理工大学 | Universal no-reference image quality evaluation method based on transformation domain and spatial domain |
US10217204B2 (en) * | 2016-01-22 | 2019-02-26 | Nuctech Company Limited | Imaging system and method of evaluating an image quality for the imaging system |
CN106997585A (en) * | 2016-01-22 | 2017-08-01 | 同方威视技术股份有限公司 | Imaging system and image quality evaluating method |
CN106127234B (en) * | 2016-06-17 | 2019-05-03 | 西安电子科技大学 | Non-reference picture quality appraisement method based on characteristics dictionary |
CN106127234A (en) * | 2016-06-17 | 2016-11-16 | 西安电子科技大学 | The non-reference picture quality appraisement method of feature based dictionary |
CN107798282A (en) * | 2016-09-07 | 2018-03-13 | 北京眼神科技有限公司 | Method and device for detecting human face of living body |
CN107798282B (en) * | 2016-09-07 | 2021-12-31 | 北京眼神科技有限公司 | Method and device for detecting human face of living body |
CN107194912A (en) * | 2017-04-20 | 2017-09-22 | 中北大学 | The brain CT/MR image interfusion methods of improvement coupling dictionary learning based on rarefaction representation |
CN107194912B (en) * | 2017-04-20 | 2020-12-29 | 中北大学 | Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning |
CN108805850A (en) * | 2018-06-05 | 2018-11-13 | 天津师范大学 | A kind of frame image interfusion method merging trap based on atom |
CN110308397B (en) * | 2019-07-30 | 2021-04-02 | 重庆邮电大学 | Lithium battery multi-class fault diagnosis modeling method driven by hybrid convolutional neural network |
CN110308397A (en) * | 2019-07-30 | 2019-10-08 | 重庆邮电大学 | A kind of lithium battery multiclass fault diagnosis modeling method of mixing convolutional neural networks driving |
CN111145150A (en) * | 2019-12-20 | 2020-05-12 | 中国科学院光电技术研究所 | Universal non-reference image quality evaluation method |
CN111145150B (en) * | 2019-12-20 | 2022-11-11 | 中国科学院光电技术研究所 | Universal non-reference image quality evaluation method |
Also Published As
Publication number | Publication date |
---|---|
CN104376565B (en) | 2017-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104376565A (en) | Non-reference image quality evaluation method based on discrete cosine transform and sparse representation | |
CN105208374B (en) | A kind of non-reference picture assessment method for encoding quality based on deep learning | |
CN103996192B (en) | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model | |
CN101378519B (en) | Method for evaluating quality-lose referrence image quality base on Contourlet transformation | |
He et al. | Sparse representation for blind image quality assessment | |
CN102209257B (en) | Stereo image quality objective evaluation method | |
CN104036501B (en) | A kind of objective evaluation method for quality of stereo images based on rarefaction representation | |
CN102945552A (en) | No-reference image quality evaluation method based on sparse representation in natural scene statistics | |
CN102547368B (en) | Objective evaluation method for quality of stereo images | |
CN103517065B (en) | Method for objectively evaluating quality of degraded reference three-dimensional picture | |
CN106203444B (en) | Classification of Polarimetric SAR Image method based on band wave and convolutional neural networks | |
CN103366378B (en) | Based on the no-reference image quality evaluation method of conditional histograms shape coincidence | |
CN105574901B (en) | A kind of general non-reference picture quality appraisement method based on local contrast pattern | |
CN104036502B (en) | A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology | |
CN108053396B (en) | No-reference evaluation method for multi-distortion image quality | |
CN105049851A (en) | Channel no-reference image quality evaluation method based on color perception | |
CN107040775B (en) | A kind of tone mapping method for objectively evaluating image quality based on local feature | |
CN104851098A (en) | Objective evaluation method for quality of three-dimensional image based on improved structural similarity | |
CN115063492B (en) | Method for generating countermeasure sample for resisting JPEG compression | |
CN105160667A (en) | Blind image quality evaluation method based on combining gradient signal and Laplacian of Gaussian (LOG) signal | |
CN101562675A (en) | No-reference image quality evaluation method based on Contourlet transform | |
CN104318545A (en) | Foggy weather polarization image quality evaluation method | |
CN103258326B (en) | A kind of information fidelity method of image quality blind evaluation | |
CN109816646A (en) | A kind of non-reference picture quality appraisement method based on degeneration decision logic | |
Tang et al. | Training-free referenceless camera image blur assessment via hypercomplex singular value decomposition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20170329 |