CN107945154A - Color image quality evaluation method based on quaternary number discrete cosine transform - Google Patents
Color image quality evaluation method based on quaternion discrete cosine transform
- Publication number: CN107945154A
- Application number: CN201711101994.6A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20052—Discrete cosine transform [DCT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
Abstract
The invention discloses a color image quality evaluation method based on the quaternion discrete cosine transform, which mainly addresses the poor consistency between existing objective color image quality evaluation and subjective assessment. The implementation includes: 1) representing the three color channels of a color image with a quaternion matrix; 2) obtaining the spectral coefficients of the image with the quaternion discrete cosine transform; 3) grouping the resulting quaternion color spectral coefficients into subbands according to their spatial-frequency position; 4) computing the similarity between corresponding subbands of the original and distorted images, and obtaining the image quality score by weighted fusion. The method is simple to compute; experimental results show that, by processing the three channels of a color image jointly with quaternions, the invention evaluates color image quality more effectively and accurately, and can be used to assess color images during compression, storage, and transmission.
Description
Technical field
The invention belongs to the field of color image processing, and in particular relates to a color image quality evaluation method that can be used to assess color images during compression, storage, and transmission.
Background art
With the arrival of the big data era, multimedia information has grown explosively at an unprecedented speed. As one of the most widely used media, color images describe the objective world truthfully and vividly; compared with grayscale images, the information they carry is richer and more expressive. Color images play an important role in many fields closely tied to daily life, such as biomedicine, video multimedia, and security surveillance, so quantitatively evaluating their quality has great practical value.
Early quality evaluation algorithms for color images mostly extended grayscale-domain methods to the color domain: the R, G, and B channels of a color image are separated, a grayscale image quality evaluation algorithm is computed on each channel, and the results for the R, G, and B channels are linearly superposed into an evaluation measure. However, human visual perception of image color is not a simple superposition, so the indices obtained this way correlate poorly with subjective experience. Researchers have therefore begun to explore more principled mathematical models: characterizing the differences between colors, extracting global or local features highly correlated with color change, and simulating part of the human eye's color-perception function, advancing the development of color image quality evaluation.
Among these color image quality evaluation methods, the quaternion is a fast and effective tool for color image processing. Kolaman et al. borrowed the mathematical concept of quaternion space, expressed the R, G, B channel values of each pixel of a color image with a single real quaternion, and used the mathematical formulation of its vector space to define the correlation between pixels via structural similarity, thereby predicting image quality. Wang et al. used probability distributions: by analyzing the local variance of the R, G, B channels of the image they first constructed the quaternion coefficients, then characterized the structural information of the image with a quaternion matrix, and finally decomposed the matrix, taking the similarity between singular values as the evaluation measure of image quality. Zhang et al. proposed a full-reference color image quality evaluation method based on quaternion Tchebichef moments (QTMs), which measure the distortion of image color and structure; since QTMs are insensitive to mild distortions of high-quality images, gradient and luminance are used as complementary features, and image quality is obtained by weighted-sum pooling.
However, the above methods do not account for the influence of chromatic distortion on the quaternion spectral characteristics of color images, so chromatic distortion cannot be measured effectively, and most existing color image quality evaluation methods still fall well short of practical requirements in their performance indices.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the invention is to propose a color image quality evaluation method based on the quaternion discrete cosine transform, which characterizes the three channel values of a color image with a quaternion matrix and, combined with the quaternion discrete cosine transform, evaluates color image quality more effectively and accurately.
To achieve this object, the technical scheme of the invention includes the following steps:
(1) Using the color three-channel information of the original image and the distorted image, construct quaternion matrices I_q^R(x,y) and I_q^D(x,y) of size X×Y, where I_q^R(x,y) is the quaternion matrix of the original image, I_q^D(x,y) is the quaternion matrix of the distorted image, X and Y are the length and width of the image, and (x,y) is the position of a pixel in the image;
(2) apply the local 8×8 quaternion discrete cosine transform to the original and distorted images to obtain the quaternion spectral coefficients QDCT_q^R(r,s) of the original image and QDCT_q^D(r,s) of the distorted image, where R denotes the original image, D the distorted image, L a local region, and (r,s) the position of a coefficient in the local image block, r being the row and s the column; μ_q is a pure quaternion satisfying μ_q² = -1, and I_q^{LR}(x,y) and I_q^{LD}(x,y) are the local image blocks of the original and distorted images;
(3) collect the coefficients at the same position of the quaternion spectral blocks to form 64 quaternion subbands;
(4) quantify the quaternion AC subband similarity QDSS_{m,n}(x,y) and the quaternion DC subband similarity QDSS_{0,0}(x,y) between the original and distorted images, where (m,n) is the subband position, Q_{σ^R_{m,n}} and Q_{σ^D_{m,n}} are the quaternion local standard deviations of subband (m,n) of the original and distorted images, Q_{σ^{RD}_{0,0}} is the covariance of the DC subbands of the original and distorted images, and C is a constant greater than 100 and less than 1000;
(5) fuse the quaternion similarities of the different subbands of the original and distorted images by weighting to obtain the final quality evaluation value Q, where w_{m,n}, the weight with which each subband influences distortion perception, is obtained from a Gaussian weighting function with a fixed standard deviation.
The invention has the following advantages:
1) More accurate evaluation results. The prior art does not account for the influence of chromatic distortion on the quaternion spectral characteristics of color images, so chromatic distortion cannot be measured effectively. The invention applies the local 8×8 quaternion discrete cosine transform to the original and distorted images, obtains their quaternion spectral coefficients, and uses these spectral coefficients to measure the influence of chromatic distortion on the quaternion spectral characteristics of the color image, so the quality of color images can be evaluated more accurately.
2) More reasonable evaluation results. Most existing image quality evaluation algorithms are designed for grayscale images, whose pixels are scalars; applying such algorithms directly to chromatically distorted images yields evaluation results far from subjective perception. The invention represents the color pixels of the R, G, B channels of the image with a single pure quaternion and uses the vector properties of quaternions to define the similarity between corresponding pixels of the original and distorted images, so the evaluation of distorted image quality is more reasonable.
3) The invention is simple to compute and highly consistent with subjective quality assessment. By making full use of the vector nature of color images when evaluating their quality, it is more efficient and more accurate than traditional algorithms.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 shows 6 different types of color-distorted images from the TID2013 database;
Fig. 3 compares the objective evaluation results of the invention with the subjective evaluation results on all chromatically distorted images in the TID2013 database.
Embodiment
This example consists of four parts. Part I represents the three color channel values of the original and distorted images with quaternion matrices. Part II applies the local 8×8 quaternion discrete cosine transform to the original and distorted images to obtain their quaternion spectral coefficients. Part III collects the quaternion frequency coefficients at the same position and frequency of the different blocks to form 64 quaternion subbands, and computes the subband similarities of the AC and DC coefficients between the original and distorted images with different similarity measures. Part IV fuses the similarities of all subbands by weighting to obtain the final quality evaluation value.
With reference to Fig. 1, the steps of the invention are as follows:
Step 1. Define the color pixels of the original and distorted images as quaternions.
The quaternion is a mathematical concept invented by the Irish mathematician Hamilton in 1843; it is the simplest hypercomplex number. A quaternion q consists of one real part and three imaginary parts:
q = a + bi + cj + dk,
where a, b, c, d are real numbers and i, j, k are three imaginary units satisfying i² = j² = k² = -1.
Since a color image consists of the three channels r, g, b, the three-channel color information is used to construct the X×Y quaternion matrix I_q^R(x,y) of the original image and the quaternion matrix I_q^D(x,y) of the distorted image, as follows:
(1.1) Represent each color pixel of the original image as a pure quaternion q_R(x,y), and each color pixel of the distorted image as a pure quaternion q_D(x,y):
q_R(x,y) = r_R(x,y)i + g_R(x,y)j + b_R(x,y)k
q_D(x,y) = r_D(x,y)i + g_D(x,y)j + b_D(x,y)k
where (x,y) is the position of the pixel in the image; q_R(x,y) is the quaternion at position (x,y) of the original image, and r_R(x,y), g_R(x,y), b_R(x,y) are the pixels at position (x,y) of the r, g, b channels of the original image; q_D(x,y) is the quaternion at position (x,y) of the distorted image, and r_D(x,y), g_D(x,y), b_D(x,y) are the pixels at position (x,y) of the r, g, b channels of the distorted image;
(1.2) From the quaternion q_R(x,y) at each position of the original image and the quaternion q_D(x,y) at each position of the distorted image, form the original-image quaternion matrix I_q^R(x,y) and the distorted-image quaternion matrix I_q^D(x,y), where X is the length of the image, Y is its width, and the size of the image is X×Y.
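As a minimal illustration of step 1 (our own sketch, not the patent's code), the pure-quaternion pixel representation can be written in Python with NumPy, storing each quaternion as a 4-vector (real, i, j, k); the function name `rgb_to_quaternion` is an assumption:

```python
import numpy as np

def rgb_to_quaternion(img):
    """Encode an RGB image of shape (H, W, 3) as pure quaternions.

    Each pixel r*i + g*j + b*k is stored as the 4-vector
    (real, i, j, k) = (0, r, g, b), giving an (H, W, 4) array.
    """
    img = np.asarray(img, dtype=np.float64)
    h, w, _ = img.shape
    q = np.zeros((h, w, 4))
    q[..., 1:] = img  # real part stays 0: a pure quaternion
    return q

# A 1x1 pure-red pixel becomes the quaternion 0 + 1i + 0j + 0k.
pixel = rgb_to_quaternion(np.array([[[1.0, 0.0, 0.0]]]))
```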
Step 2. Quaternion discrete cosine transform.
For a color image, separating the three channels and processing them with traditional methods is insufficient. The invention therefore extends the discrete cosine transform to the quaternion discrete cosine transform, as follows:
(2.1) Apply the local quaternion discrete cosine transform to the quaternion matrix I_q^R(x,y) of the original image to obtain the quaternion spectral coefficients QDCT_q^R(r,s) of the original image:
$$\mathrm{QDCT}_q^R(r,s)=\alpha_r^X\,\alpha_s^Y\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1}\mu_q\,I_q^{LR}(x,y)\,\beta_{r,x}^X\,\beta_{s,y}^Y$$
where X and Y are the length and width of the image, μ_q is a pure quaternion satisfying μ_q² = -1, and I_q^{LR}(x,y) is the local image block when applying the local quaternion discrete cosine transform to the original image;
(2.2) Apply the local quaternion discrete cosine transform to the quaternion matrix I_q^D(x,y) of the distorted image to obtain the quaternion spectral coefficients QDCT_q^D(r,s) of the distorted image:
$$\mathrm{QDCT}_q^D(r,s)=\alpha_r^X\,\alpha_s^Y\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1}\mu_q\,I_q^{LD}(x,y)\,\beta_{r,x}^X\,\beta_{s,y}^Y$$
where I_q^{LD}(x,y) is the local image block when applying the local quaternion discrete cosine transform to the distorted image.
α_r^X and β_{r,x}^X are defined as
$$\alpha_r^X=\begin{cases}\sqrt{1/X}&r=0\\ \sqrt{2/X}&r\neq 0\end{cases},\qquad \beta_{r,x}^X=\cos\!\left[\frac{\pi}{X}\Big(x+\frac{1}{2}\Big)r\right];$$
and α_s^Y and β_{s,y}^Y as
$$\alpha_s^Y=\begin{cases}\sqrt{1/Y}&s=0\\ \sqrt{2/Y}&s\neq 0\end{cases},\qquad \beta_{s,y}^Y=\cos\!\left[\frac{\pi}{Y}\Big(y+\frac{1}{2}\Big)s\right].$$
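Because the basis weights α and β above are real, the QDCT reduces to taking the ordinary 2-D DCT-II of each quaternion component and left-multiplying every coefficient by μ_q. A sketch of that observation follows; the helper names and the particular unit pure quaternion μ_q = (i + j + k)/√3 are our assumptions, since the patent does not fix μ_q at this point:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x M block (the alpha/beta sums above)."""
    n, m = block.shape
    x, y = np.arange(n), np.arange(m)
    r, s = np.arange(n)[:, None], np.arange(m)[:, None]
    cx = np.cos(np.pi / n * (x + 0.5) * r)   # beta^X_{r,x}
    cy = np.cos(np.pi / m * (y + 0.5) * s)   # beta^Y_{s,y}
    ax = np.where(np.arange(n) == 0, np.sqrt(1 / n), np.sqrt(2 / n))
    ay = np.where(np.arange(m) == 0, np.sqrt(1 / m), np.sqrt(2 / m))
    return ax[:, None] * ay[None, :] * (cx @ block @ cy.T)

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (..., 4) arrays."""
    a1, b1, c1, d1 = np.moveaxis(p, -1, 0)
    a2, b2, c2, d2 = np.moveaxis(q, -1, 0)
    return np.stack([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ], axis=-1)

def qdct(block_q, mu):
    """QDCT of an (N, M, 4) quaternion block: since the DCT basis is real,
    it equals mu * (component-wise 2-D DCT of the block)."""
    spec = np.stack([dct2(block_q[..., k]) for k in range(4)], axis=-1)
    return quat_mul(np.broadcast_to(mu, spec.shape), spec)

mu = np.array([0.0, 1.0, 1.0, 1.0]) / np.sqrt(3)  # an assumed unit pure quaternion
block = np.zeros((8, 8, 4))
block[..., 1:] = 0.5                               # a constant gray 8x8 block
coeff = qdct(block, mu)
```

For the constant gray block only the DC coefficient is nonzero, and (since μ_q is parallel to the pixel quaternion) it lands entirely in the real part.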
Step 3. Compute the quaternion subband similarity.
(3.1) Divide the original and distorted images into non-overlapping 8×8 rectangular blocks;
(3.2) Apply the quaternion discrete cosine transform to each block, and collect the quaternion coefficients at the same position of the different blocks to form 64 quaternion subbands;
(3.3) Compute the quaternion similarity between corresponding subbands of the original and distorted images.
Since image distortion affects the DC-component and AC-component coefficients of the quaternion subbands differently, this example uses different formulas to quantify the AC subband similarity QDSS_{m,n}(x,y) and the DC subband similarity QDSS_{0,0}(x,y) between the original and distorted images:
$$\mathrm{QDSS}_{m,n}(x,y)=\frac{2\,Q_{\sigma_{m,n}^R(x,y)}\,Q_{\sigma_{m,n}^D(x,y)}+C}{Q_{\sigma_{m,n}^R(x,y)}^2+Q_{\sigma_{m,n}^D(x,y)}^2+C}$$
$$\mathrm{QDSS}_{0,0}(x,y)=\frac{2\,Q_{\sigma_{0,0}^R(x,y)}\,Q_{\sigma_{0,0}^D(x,y)}+C}{Q_{\sigma_{0,0}^R(x,y)}^2+Q_{\sigma_{0,0}^D(x,y)}^2+C}$$
where (m,n) denotes the subband position: m = 0, n = 0 is the DC subband, and m ≠ 0 or n ≠ 0 an AC subband. Q_{σ^R_{m,n}} is the quaternion local standard deviation of subband (m,n) of the original image, Q_{σ^D_{m,n}} that of the distorted image, and Q_{σ^{RD}_{0,0}} the covariance of the DC subbands of the original and distorted images; to keep the denominator from vanishing, C is a constant greater than 100 and less than 1000.
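A scalar sketch of steps (3.1)-(3.3): group blockwise transform coefficients into 64 subbands by their position inside each 8×8 block, then apply the SSIM-style ratio above to local-deviation maps. The patent's quaternion local standard deviations are abstracted here to plain arrays, and C = 500 is one assumed value inside the stated (100, 1000) range:

```python
import numpy as np

def extract_subbands(coeffs, block=8):
    """Group blockwise transform coefficients into subbands: subband (m, n)
    collects the (m, n)-th coefficient of every block x block tile."""
    h, w = coeffs.shape
    bh, bw = h // block, w // block
    tiles = coeffs[:bh * block, :bw * block].reshape(bh, block, bw, block)
    # result[m, n] is a (bh, bw) image made of the (m, n) coefficients
    return tiles.transpose(1, 3, 0, 2)

def qdss(sigma_r, sigma_d, C=500.0):
    """SSIM-style subband similarity between two local-deviation maps,
    with C stabilizing the ratio as in the patent (100 < C < 1000)."""
    return (2 * sigma_r * sigma_d + C) / (sigma_r**2 + sigma_d**2 + C)

sub = extract_subbands(np.arange(64.0).reshape(8, 8))
s = qdss(np.array([10.0]), np.array([10.0]))  # identical deviations
```

Identical deviation maps give a similarity of exactly 1.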
Step 4. Quality evaluation.
The subband similarities of the quaternion AC coefficients and of the quaternion DC coefficient are weighted, fused, and the modulus is taken to obtain the final quality evaluation value Q, where w_{m,n} is the weight with which each subband influences distortion perception: w_{0,0} is the weight of the DC subband, and w_{m,n} (m ≠ 0 or n ≠ 0) are the weights of the AC subbands. The invention obtains w_{m,n} from a Gaussian weighting function whose standard deviation is fixed. Q lies in the range [0, 1]; the closer the result is to 1, the better the image quality.
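The Gaussian weighting and pooling can be sketched as follows. The Gaussian's standard deviation (here sigma = 2.0) and the normalization of the weights to sum to 1 are our assumptions, since the patent does not state the exact value here:

```python
import numpy as np

def gaussian_weights(size=8, sigma=2.0):
    """Gaussian weights w_{m,n} over subband positions (m, n),
    normalized to sum to 1; sigma = 2.0 is an assumed value."""
    m = np.arange(size)[:, None]
    n = np.arange(size)[None, :]
    w = np.exp(-(m**2 + n**2) / (2 * sigma**2))
    return w / w.sum()

def pool_quality(qdss_maps, weights):
    """Weighted fusion of per-subband similarity maps into one score Q."""
    per_subband = qdss_maps.mean(axis=(2, 3))  # average each similarity map
    return float(np.sum(weights * per_subband))

w = gaussian_weights()
q = pool_quality(np.ones((8, 8, 4, 4)), w)  # identical images
```

For identical images every similarity map is 1, so the pooled score Q is exactly 1, the top of the stated [0, 1] range.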
The advantages of the invention can be further illustrated by the following experiments.
1. Experimental conditions
The experiments use the TID2013 and LIVE databases; the 6 kinds of color distortion in TID2013 form the subset TID2013(C). As shown in Fig. 2, the six distortions are additive noise, JPEG compression, color quantization, quantization noise, change of color saturation, and chromatic aberrations. Table 1 lists the 6 chromatic distortions of the TID2013 database together with the number of each distortion; for example, additive noise is denoted distortion #2.
Table 1. The 6 chromatic distortions in the TID2013 database
The evaluation indices chosen are the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (ROCC). Both reflect the correlation between objective and subjective evaluation results: the higher the PLCC and ROCC values, the better the objective evaluation agrees with the subjective one, and the more effective the algorithm:
$$\mathrm{PLCC}=\frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}},\qquad \mathrm{ROCC}=1-\frac{6\sum_{i=1}^{n}(r_{x_i}-r_{y_i})^2}{n(n^2-1)}$$
where x_i is the objective predicted value of the i-th image, mapped to a predicted subjective score by a nonlinear fitting function; y_i is the actual subjective score; x̄ and ȳ are the averages of the predicted and actual subjective scores over all test images; n is the total number of test images; r_{x_i} is the rank of the i-th image's score after the subjective scores are sorted in ascending or descending order, and r_{y_i} is the rank of the corresponding objective score under the same ordering.
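The two indices can be computed directly from the definitions above; this sketch assumes no tied scores when ranking:

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation between objective scores x
    and subjective scores y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    Assumes no ties (double argsort assigns ranks 0..n-1)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(np.asarray(x)), rank(np.asarray(y)))

# A monotonic but nonlinear relation still gives a rank correlation of 1.
x = np.array([1.0, 2.0, 3.0, 4.0])
r = srocc(x, x**3)
```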
2. Experimental contents
The invention and several state-of-the-art image quality evaluation methods are used to evaluate the distorted images in the TID2013, TID2013(C), and LIVE databases, and their PLCC and ROCC values are compared.
Experiment 1
The invention and seven existing image quality evaluation methods (DSS, iCID, CID, QSSIM, S-SSIM, FSIM, and GMSD) evaluate the distorted images in the TID2013 database, and their PLCC values are compared; the results are given in Table 2.
Table 2. PLCC values of the invention and the other algorithms on the 6 chromatic distortions of the TID2013 database
As Table 2 shows, the invention outperforms the other methods on the #2 and #22 distortions and achieves high PLCC values on all 6 classes of color distortion, as shown in Fig. 3.
Fig. 3(a) shows the objective and subjective evaluation results of the invention on the #7 distorted images, Fig. 3(b) on the #18 distorted images, Fig. 3(c) on the #22 distorted images, and Fig. 3(d) on the #23 distorted images. The abscissa of each point in Fig. 3 is the objective evaluation result, i.e. Q, and the ordinate is the subjective evaluation result, i.e. the MOS value; the better the curve fits, the more consistent the algorithm's objective evaluation is with the subjective one. Fig. 3 shows that the objective evaluation results of the invention on the four distortions agree with the subjective evaluation results.
In summary, compared with the reference methods, the evaluation accuracy of the invention improves markedly on the 6 chromatic distortions, yielding results more consistent with subjective evaluation.
Experiment 2
The invention and seven existing image quality evaluation methods (SSIM, VSSIM, UCIF, CID, QSSIM, CIEDE, DE2000) evaluate the distorted images in the TID2013, TID2013(C), and LIVE databases, and their PLCC and ROCC values are compared; the results are given in Table 3.
Table 3. Evaluation results of the invention and the other algorithms on the TID2013, TID2013(C), and LIVE databases
As Table 3 shows, the invention achieves good results on TID2013 and TID2013(C); on the LIVE database it is slightly inferior to the VSSIM method, but compared with the other methods it achieves fairly ideal results.
In summary, relative to existing methods the invention achieves better prediction consistency on the color-distortion databases, showing that its quality evaluation of chromatically distorted images matches human subjective perception more closely.
Claims (3)
1. A color image quality evaluation method based on the quaternion discrete cosine transform, comprising:
(1) using the color three-channel information of the original image and the distorted image, constructing quaternion matrices I_q^R(x,y) and I_q^D(x,y) of size X×Y, where I_q^R(x,y) is the quaternion matrix of the original image, I_q^D(x,y) is the quaternion matrix of the distorted image, X and Y are the length and width of the image, and (x,y) is the position of a pixel in the image;
(2) applying the local 8×8 quaternion discrete cosine transform to the original and distorted images to obtain the quaternion spectral coefficients QDCT_q^R(r,s) of the original image and QDCT_q^D(r,s) of the distorted image:
$$\mathrm{QDCT}_q^R(r,s)=\alpha_r^X\,\alpha_s^Y\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1}\mu_q\,I_q^{LR}(x,y)\,\beta_{r,x}^X\,\beta_{s,y}^Y$$
$$\mathrm{QDCT}_q^D(r,s)=\alpha_r^X\,\alpha_s^Y\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1}\mu_q\,I_q^{LD}(x,y)\,\beta_{r,x}^X\,\beta_{s,y}^Y$$
where R denotes the original image, D the distorted image, L a local region, and (r,s) the position of a coefficient in the local image block, r being the row and s the column; μ_q is a pure quaternion satisfying μ_q² = -1; I_q^{LR}(x,y) is the local image block when applying the local quaternion discrete cosine transform to the original image, and I_q^{LD}(x,y) the local image block when applying it to the distorted image; X and Y are the length and width of the image. α_r^X and β_{r,x}^X are defined as follows:
$$\alpha_r^X=\begin{cases}\sqrt{\dfrac{1}{X}}&r=0\\[4pt]\sqrt{\dfrac{2}{X}}&r\neq 0\end{cases},\qquad \beta_{r,x}^X=\cos\!\left[\frac{\pi}{X}\Big(x+\frac{1}{2}\Big)r\right];$$
α_s^Y and β_{s,y}^Y are defined as follows:
$$\alpha_s^Y=\begin{cases}\sqrt{\dfrac{1}{Y}}&s=0\\[4pt]\sqrt{\dfrac{2}{Y}}&s\neq 0\end{cases},\qquad \beta_{s,y}^Y=\cos\!\left[\frac{\pi}{Y}\Big(y+\frac{1}{2}\Big)s\right];$$
(3) coefficient for extracting quaternary number spectral coefficient same position forms 64 quaternary number subbands;
(4) using equation below subband similitude QDSS is exchanged to quantify quaternary number between original image and distorted imagem,n(x,y)
QDSS similar with quaternary number direct current subband0,0(x,y):
$$\mathrm{QDSS}_{m,n}(x,y)=\frac{2\,Q_{\sigma_{m,n}^R(x,y)}\,Q_{\sigma_{m,n}^D(x,y)}+C}{Q_{\sigma_{m,n}^R(x,y)}^2+Q_{\sigma_{m,n}^D(x,y)}^2+C}$$
QDSS_{0,0}(x, y) = [(2·σ^R_{0,0}(x, y)·σ^D_{0,0}(x, y) + C) / (σ^R_{0,0}(x, y)² + σ^D_{0,0}(x, y)² + C)] · [(σ^{RD}_{0,0}(x, y) + C) / (σ^R_{0,0}(x, y) + σ^D_{0,0}(x, y) + C)]
where m, n denote the subband position, σ^R_{m,n}(x, y) and σ^D_{m,n}(x, y) denote the quaternion local standard deviations of subband (m, n) of the original image and the distorted image respectively, σ^{RD}_{0,0}(x, y) denotes the covariance of the DC subbands of the original image and the distorted image, and C is a constant greater than 100 and less than 1000;
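The two similarity maps can be sketched directly from these formulas. In the sketch below the local standard deviation and covariance maps are assumed to be precomputed (e.g. over a sliding window), and `C_CONST` is an arbitrary choice within the claimed 100–1000 range:

```python
import numpy as np

C_CONST = 300.0  # per the claim, any constant between 100 and 1000

def qdss_ac(sigma_r, sigma_d, C=C_CONST):
    # AC-subband similarity between local std maps of the reference (R)
    # and distorted (D) subbands.
    return (2 * sigma_r * sigma_d + C) / (sigma_r**2 + sigma_d**2 + C)

def qdss_dc(sigma_r, sigma_d, sigma_rd, C=C_CONST):
    # DC-subband similarity multiplies in a term built from the
    # covariance map sigma_rd of the two DC subbands.
    return qdss_ac(sigma_r, sigma_d, C) * (sigma_rd + C) / (sigma_r + sigma_d + C)

# Identical std maps give an AC similarity of exactly 1 at every pixel.
s = np.full((4, 4), 2.0)
print(qdss_ac(s, s).mean())  # prints 1.0
```

As with SSIM-style measures, the constant C stabilizes the ratio where the local statistics are near zero.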
(5) performing a weighted fusion of the quaternion similarities of the different subbands of the original image and the distorted image to obtain the final quality evaluation value Q:
Q = | Σ_{m,n=0}^{7} w_{m,n}·QDSS_{m,n} |
where w_{m,n} is the weight parameter characterizing each channel's influence on distortion perception; w_{m,n} is obtained from a Gaussian weighting function, and the standard deviation of the Gaussian function is
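The fusion step can be sketched as follows. The exact standard deviation of the Gaussian weighting function is not recoverable from this text, so `sigma=1.5` below is purely an assumption, as is the normalization of the weights:

```python
import numpy as np

def gaussian_weights(size=8, sigma=1.5):
    # Hypothetical Gaussian weighting over subband indices (m, n); the
    # standard deviation is an assumption, not taken from the patent.
    m, n = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    w = np.exp(-(m**2 + n**2) / (2 * sigma**2))
    return w / w.sum()  # normalize so the weights sum to 1

def fuse(qdss_means, w):
    # Q = | sum_{m,n=0}^{7} w_{m,n} * QDSS_{m,n} |
    return abs(float(np.sum(w * qdss_means)))

w = gaussian_weights()
print(round(fuse(np.ones((8, 8)), w), 6))  # prints 1.0
```

With normalized weights, perfectly similar subbands (all QDSS values equal to 1) fuse to Q = 1, which gives the score a natural upper anchor.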
2. The method according to claim 1, wherein constructing in (1) the original-image quaternion matrix I_q^R of size X*Y from the three color channels of the original image is carried out as follows:
(1a) expressing a quaternion as:
Q = a + bi + cj + dk
where a, b, c, d are the real-valued coefficients of the four components, and i, j, k are the three imaginary units satisfying i² = j² = k² = −1;
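For illustration, the algebra implied by these units can be coded as a Hamilton product on (a, b, c, d) tuples. The ij = k, jk = i, ki = j identities used below are the standard completion of the i² = j² = k² = −1 relation quoted in the claim; `qmul` is our name:

```python
def qmul(p, q):
    # Hamilton product of two quaternions given as (a, b, c, d) tuples,
    # i.e. a + b*i + c*j + d*k, using i^2 = j^2 = k^2 = ijk = -1.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

print(qmul((0, 1, 0, 0), (0, 1, 0, 0)))  # i * i -> (-1, 0, 0, 0)
```

Note the product is non-commutative (i·j = k but j·i = −k), which is why quaternion transforms distinguish left- and right-sided forms.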
(1b) according to the r, g, b channels of the original image, representing each color pixel of the original image by a pure quaternion, as follows:
q^R(x, y) = r^R(x, y)i + g^R(x, y)j + b^R(x, y)k,
where x, y denote the position of the pixel in the image, q^R(x, y) denotes the quaternion at position (x, y) of the original image, and r^R(x, y), g^R(x, y), b^R(x, y) denote the pixels at position (x, y) in the r, g and b channels of the original image respectively;
(1c) using the quaternion q^R(x, y) of each position in the original image to form the original-image quaternion matrix I_q^R, which can be represented as follows:
I_q^R(x, y) =
[ q^R(1,1)   q^R(1,2)   q^R(1,3)   …           …          ]
[ q^R(2,1)   q^R(2,2)   q^R(2,3)   …           …          ]
[ q^R(3,1)   q^R(3,2)   q^R(3,3)   …           …          ]
[ …          …          …          …           q^R(X-1,Y) ]
[ …          …          …          q^R(X,Y-1)  q^R(X,Y)   ]
where X denotes the length of the original image, Y denotes its width, and the size of the original image is X*Y.
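Building this pure-quaternion matrix is straightforward in numpy: store each pixel as the four components (real, i, j, k) with the real part fixed at zero. A minimal sketch (`pure_quaternion_image` is our name):

```python
import numpy as np

def pure_quaternion_image(rgb):
    # rgb: (X, Y, 3) array holding the r, g, b channels.
    # Each pixel becomes the pure quaternion 0 + r*i + g*j + b*k,
    # stored as an (X, Y, 4) array of (real, i, j, k) components.
    X, Y, _ = rgb.shape
    q = np.zeros((X, Y, 4), dtype=float)
    q[..., 1:] = rgb  # real part stays 0 for a pure quaternion
    return q

img = np.arange(24, dtype=float).reshape(2, 4, 3)
Iq = pure_quaternion_image(img)
print(Iq.shape)  # prints (2, 4, 4)
```

The same construction applies verbatim to the distorted image in claim 3, only the input array changes.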
3. The method according to claim 1, wherein constructing in (1) the distorted-image quaternion matrix I_q^D of size X*Y from the three color channels of the distorted image is carried out as follows:
(1d) expressing a quaternion as:
Q = a + bi + cj + dk
where a, b, c, d are the real-valued coefficients of the four components, and i, j, k are the three imaginary units satisfying i² = j² = k² = −1;
(1e) according to the r, g, b channels of the distorted image, representing each color pixel of the distorted image by a pure quaternion, as follows:
q^D(x, y) = r^D(x, y)i + g^D(x, y)j + b^D(x, y)k
where x, y denote the position of the pixel in the image, q^D(x, y) denotes the quaternion at position (x, y) of the distorted image, and r^D(x, y), g^D(x, y), b^D(x, y) denote the pixels at position (x, y) in the r, g and b channels of the distorted image respectively;
(1f) using the quaternion q^D(x, y) of each position in the distorted image to form the distorted-image quaternion matrix I_q^D:
I_q^D(x, y) =
[ q^D(1,1)   q^D(1,2)   q^D(1,3)   …           …          ]
[ q^D(2,1)   q^D(2,2)   q^D(2,3)   …           …          ]
[ q^D(3,1)   q^D(3,2)   q^D(3,3)   …           …          ]
[ …          …          …          …           q^D(X-1,Y) ]
[ …          …          …          q^D(X,Y-1)  q^D(X,Y)   ]
where X denotes the length of the distorted image, Y denotes its width, and the size of the distorted image is X*Y.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711101994.6A CN107945154A (en) | 2017-11-10 | 2017-11-10 | Color image quality evaluation method based on quaternary number discrete cosine transform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107945154A true CN107945154A (en) | 2018-04-20 |
Family
ID=61933698
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711101994.6A Pending CN107945154A (en) | 2017-11-10 | 2017-11-10 | Color image quality evaluation method based on quaternary number discrete cosine transform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107945154A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102547368A (en) * | 2011-12-16 | 2012-07-04 | 宁波大学 | Objective evaluation method for quality of stereo images |
CN104010189A (en) * | 2014-05-28 | 2014-08-27 | 宁波大学 | Objective video quality assessment method based on chromaticity co-occurrence matrix weighting |
CN104063864A (en) * | 2014-06-26 | 2014-09-24 | 上海交通大学 | Image fuzziness assessment method based on quaternary phase congruency model |
CN105741328A (en) * | 2016-01-22 | 2016-07-06 | 西安电子科技大学 | Shot image quality evaluation method based on visual perception |
WO2016145571A1 (en) * | 2015-03-13 | 2016-09-22 | 深圳大学 | Method for blind image quality assessment based on conditional histogram codebook |
Non-Patent Citations (2)
Title |
---|
AMNON BALANOV et al.: "IMAGE QUALITY ASSESSMENT BASED ON DCT SUBBAND SIMILARITY", 2015 IEEE International Conference on Image Processing (ICIP) * |
WEI FENG, BO HU: "Quaternion Discrete Cosine Transform and its Application in Color Template Matching", 2008 Congress on Image and Signal Processing * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109191431A (en) * | 2018-07-27 | 2019-01-11 | 天津大学 | High dynamic color image quality evaluation method based on characteristic similarity |
CN111246205A (en) * | 2020-02-04 | 2020-06-05 | 淮阴师范学院 | Image compression method based on directional double-quaternion filter bank |
CN115484354A (en) * | 2022-09-14 | 2022-12-16 | 姜川 | Color image compression method based on quaternion matrix singular value decomposition |
CN115484354B (en) * | 2022-09-14 | 2024-02-23 | 姜川 | Color image compression method based on quaternion matrix singular value decomposition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103996192B (en) | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model | |
CN105208374B (en) | A kind of non-reference picture assessment method for encoding quality based on deep learning | |
CN109410261B (en) | Monocular image depth estimation method based on pyramid pooling module | |
CN107027023B (en) | Based on the VoIP of neural network without reference video communication quality method for objectively evaluating | |
CN104361593B (en) | A kind of color image quality evaluation method based on HVS and quaternary number | |
CN101950422B (en) | Singular value decomposition(SVD)-based image quality evaluation method | |
CN104134204B (en) | Image definition evaluation method and image definition evaluation device based on sparse representation | |
CN108428227A (en) | Non-reference picture quality appraisement method based on full convolutional neural networks | |
CN109858461A (en) | A kind of method, apparatus, equipment and storage medium that dense population counts | |
CN101562675B (en) | No-reference image quality evaluation method based on Contourlet transform | |
CN104376565B (en) | Based on discrete cosine transform and the non-reference picture quality appraisement method of rarefaction representation | |
CN106709958A (en) | Gray scale gradient and color histogram-based image quality evaluation method | |
CN107464222B (en) | Based on tensor space without reference high dynamic range images method for evaluating objective quality | |
CN107105223B (en) | A kind of tone mapping method for objectively evaluating image quality based on global characteristics | |
CN108052980A (en) | Air quality grade detection method based on image | |
CN102421007A (en) | Image quality evaluating method based on multi-scale structure similarity weighted aggregate | |
CN107945154A (en) | Color image quality evaluation method based on quaternary number discrete cosine transform | |
CN104866871B (en) | Hyperspectral image classification method based on projection structure sparse coding | |
CN104572538A (en) | K-PLS regression model based traditional Chinese medicine tongue image color correction method | |
CN108053396A (en) | A kind of more distorted image quality without with reference to evaluation method | |
CN109191428A (en) | Full-reference image quality evaluating method based on masking textural characteristics | |
CN107396095A (en) | One kind is without with reference to three-dimensional image quality evaluation method | |
CN106548472A (en) | Non-reference picture quality appraisement method based on Walsh Hadamard transform | |
CN103745466A (en) | Image quality evaluation method based on independent component analysis | |
CN106412571A (en) | Video quality evaluation method based on gradient similarity standard deviation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180420 |