CN104361593B - A color image quality evaluation method based on HVS and quaternions

Publication number: CN104361593B (granted; also published as CN104361593A)
Application number: CN201410650245.9A
Authority: CN (China)
Legal status: Active
Inventors: 李勃, 陈惠娟, 于海峰, 吴炜, 赵鹏, 张宇澄, 何玉婷, 许宗平
Assignee (original and current): Nanjing University
Classification: G06T 2207/30168 - image quality inspection (G06T: image data processing or generation; G06: computing; G: physics)
Prior art keywords: image, evaluated, function, singular value, quaternion
Abstract

The invention discloses a color image quality evaluation method based on HVS and quaternions, belonging to the technical fields of image processing and computer vision. The steps of the invention are: first, constructing a mathematical evaluation model of the original reference image and the distorted image to be evaluated by analyzing human visual characteristics: the spatial position function Q_L, local variance Q_V, texture edge complexity function Q_TE and color function Q_C of the image; second, constructing the quaternion matrices of the original reference image and the distorted image to be evaluated, and performing singular value decomposition on the quaternion matrices to obtain the singular value feature vectors of the images; third, measuring the image distortion degree using the Euclidean distance between the singular value feature vectors of the original reference image and the distorted image to be evaluated. The invention combines human visual characteristics with quaternions, extracts the luminance and chrominance information of the image, and constructs the spatial position function, texture edge complexity function and local variance using human visual characteristics, so that the evaluation result is more consistent with how the human eye perceives the image.

Description

Color image quality evaluation method based on HVS and quaternion
Technical Field
The invention relates to the technical fields of image processing and computer vision, in particular to a method for evaluating the quality of a color image by constructing a mathematical model consistent with human visual observation, using the characteristics of the human visual system combined with quaternion singular value decomposition.
Background
Image quality is one of the important parameters in the fields of image processing and computer vision. With the development of computer science and technology, the requirements on image quality in printing, ceramic tile inspection, imaging, image retrieval and other applications are increasingly high; however, image distortion and degradation of varying degrees arise during image acquisition, processing, compression, transmission and display.
Humans are the ultimate recipients of images, so their subjective quality assessment of images is considered the most reliable. In subjective quality evaluation, observers score the visual perception effect of the target image according to their subjective perceptual experience or to some evaluation criteria uniformly specified in advance, and the scores of all observers are then averaged with weights; the result obtained is the subjective quality score of the image. However, subjective image quality evaluation is time-consuming and labor-intensive, is greatly affected by the observer, the image type and the surrounding environment, and has weak real-time performance. People are therefore constantly dedicated to researching objective image quality evaluation methods capable of correctly, timely and effectively reflecting human subjective visual perception. Objective image quality evaluation uses algorithms, mathematical models and the like to give timely and rapid feedback on image quality, so as to obtain an evaluation result consistent with human subjective feeling. The methods are various, and their classification differs with different entry points and basic ideas. According to whether the original image is referenced, objective quality evaluation methods are classified into three types: full-reference, reduced-reference and no-reference. The full-reference type is suitable for encoder design and performance comparison between encoders, while the reduced-reference and no-reference types are suitable for bandwidth-limited multimedia applications. Because the full-reference type can utilize all the information of the original image, its evaluation result is more consistent with human subjective evaluation.
The Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE), discussed by Liu A et al. in IEEE Transactions on Image Processing in 2012, are the most classical full-reference objective image quality evaluation methods. PSNR reflects the fidelity of the image to be evaluated, while MSE reflects the dissimilarity between the image to be evaluated and the original image. Both are theoretically simple and clear, easy to understand and convenient to compute, but they only compare individual pixels of the image and ignore possible structural relationships among them, so their results deviate from what the human eye actually sees.
The SSIM algorithm, proposed by Z. Wang et al. in IEEE Transactions on Image Processing in 2004, comprehensively compares the differences between the original undistorted image and the image to be evaluated in three kinds of information: luminance, contrast and structural similarity. It takes the structural relationship between pixels into account, but suffers from problems such as poor grasp of detail under severe blurring and difficulty in determining the index parameters.
The gradient-magnitude similarity deviation algorithm GMSD, proposed in "Gradient magnitude similarity deviation: a highly efficient perceptual image quality index" published by Xue et al. in IEEE Transactions on Image Processing in 2013, takes into account that gradients are highly sensitive to image distortion, but the processing of color images must be converted to the grayscale domain. Evaluating a color image with this method thus requires converting it into a grayscale image, and the evaluation result deviates from the actual conditions seen by human eyes.
A search reveals Chinese patent application No. 200610027433.1, filed on June 8, 2006, with the invention title: an image quality evaluation method based on hypercomplex singular value decomposition. The method directly models the color image with hypercomplex numbers (quaternions), extracts the inherent energy characteristics of the color image using hypercomplex singular value decomposition, constructs a distortion mapping matrix from the distance between the singular values of the original image and those of the distorted image, and evaluates the quality of the color image with this distortion mapping matrix. Chinese patent application No. 201210438606.4, filed on November 6, 2012, with the invention title: application of a color image quality evaluation algorithm, takes the chrominance, luminance and saturation of an image as the imaginary parts of a quaternion, constructs quaternion matrices of the reference image and the image to be evaluated, performs singular value decomposition on each to obtain singular value feature vectors, and finally applies grey relational analysis to calculate the correlation between the singular value feature vectors of the reference image and those of the image to be evaluated; the greater the correlation, the better the quality of the image to be evaluated. However, the evaluation results of the above applications still deviate considerably from the actual conditions seen by human eyes, and color image quality evaluation methods still need further optimization.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention provides a color image quality evaluation method based on HVS and quaternions, aiming to overcome the problem that traditional evaluation methods must convert a color image into a grayscale image when constructing the evaluation model, which causes a large deviation between the evaluation result and what the human eye actually sees. The invention combines human visual characteristics with quaternions to extract the luminance and chrominance information of the image, constructs the spatial position function, texture edge complexity function and local variance using human visual characteristics, and extracts the energy characteristics of the image with quaternion singular value decomposition so as to improve on the traditional approach of splitting the R, G, B channels, making the evaluation result more consistent with how the human eye perceives the image.
2. Technical scheme
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the invention relates to a color image quality evaluation method based on HVS and quaternion, which comprises the following steps:
Step one, constructing a mathematical evaluation model of the original reference image and the distorted image to be evaluated by analyzing human visual characteristics, the model comprising the spatial position function Q_L, local variance Q_V, texture edge complexity function Q_TE and color function Q_C of the image;
Step two, taking Q_L, Q_V and Q_TE as the imaginary parts of a quaternion and Q_C as the real part, respectively constructing the quaternion matrices of the original reference image and the distorted image to be evaluated, and performing singular value decomposition on the quaternion matrices to obtain the singular value feature vectors of the images;
Step three, measuring the image distortion degree using the Euclidean distance between the singular value feature vectors of the original reference image and the distorted image to be evaluated.
Furthermore, the specific process of constructing the mathematical evaluation model in the step one is as follows:
(1) acquiring RGB tristimulus values of an original reference image and a distorted image to be evaluated;
(2) extracting the spatial position information of the original reference image and the distorted image to be evaluated, and constructing the spatial position function Q_L and the texture edge complexity function Q_TE;
(3) converting the original reference image and the distorted image to be evaluated from RGB space into YUV color space, extracting the image luminance information to construct the local variance Q_V, and extracting the image luminance and chrominance information to construct the color function Q_C.
Further, step one constructs the spatial position function Q_L using the foveal property of the human visual system, in which e_L is the quotient of the distance from the pixel (i, j) observed by the human eye to the image centre pixel (M/2, N/2) and the distance from the first pixel of the image to the centre pixel, and e_c is a constant.
Further, step one constructs the texture edge complexity function Q_TE using the masking effect of the human visual system, said texture edge complexity function being
Q_TE = Q_T × Q_E
where Q_T is the texture complexity function of pixel (i, j) and Q_E is the edge complexity function of pixel (i, j).
Further, step one constructs the local variance Q_V using the multi-channel property of the human visual system, where the image is divided into non-overlapping blocks according to its luminance component to obtain blocks I_{i,j}, and L is the number of pixels η_p contained in an image block I_{i,j}.
further, the color function
QC=αQL+βQU
In the formula, QLAs luminance information of the image, QUα and β are the weight of the luminance and chrominance, respectively, as chrominance information of an image.
Further, the Euclidean distance in step three is
D = sqrt( Σ_{i=1}^{K} (λ_i − λ̂_i)² )
where λ_i is the singular value feature vector of the original reference image, λ̂_i is that of the distorted image to be evaluated, and K is the smaller of the numbers of eigenvalues in the two singular value feature vectors, namely the minimum of the two quaternion matrix ranks.
3. advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
(1) The color image quality evaluation method based on HVS and quaternions disclosed by the invention constructs the spatial position function Q_L, local variance Q_V, texture edge complexity function Q_TE and color function Q_C of the original reference image and the distorted image to be evaluated by analyzing human visual characteristics; the four kinds of image information are integrated through a quaternion and the energy characteristics of the image are obtained through singular value decomposition, improving on the traditional approach of splitting the R, G, B channels and well preserving the integrity of the color information; the extracted image information contains both global and local information, so the evaluation result represents the information of the image more completely;
(2) according to the color image quality evaluation method based on the HVS and the quaternion, the visual characteristics of human eyes and the quaternion are combined, the evaluation result is more consistent with the effect of human eyes on perceiving images, and the color image quality evaluation method is superior to the traditional SSIM and other typical image quality evaluation algorithms.
Drawings
FIG. 1 is a flow chart of an algorithm of a color image quality evaluation method based on HVS and quaternion according to the present invention;
FIG. 2 is an equivalent diagram of the foveal characteristic of the human visual system of the present invention;
FIG. 3 shows the results of nonlinearly fitting subjective image quality against the quality evaluation method of the present invention and against conventional methods; in FIG. 3, (a) is the nonlinear fitting curve of PSNR against DMOS values, (b) that of SSIM, (c) that of MS-SSIM, (d) that of SVD, (e) that of GMSD, and (f) that of the quality evaluation method of the present invention against DMOS values;
fig. 4 (a) to (e) are graphs comparing non-linear fitting curves of HVS-QSVD, GMSD, SSIM and DMOS values of five sets of images with different distortion types.
Detailed Description
For a further understanding of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Example 1
With reference to fig. 1, the present embodiment provides a color image quality evaluation method based on HVS and quaternion, aiming at the problem that when an evaluation model is constructed by a conventional evaluation method, a color image needs to be converted into a grayscale image for processing, so that an evaluation result does not conform to human eye perception. In the embodiment, the image energy characteristics conforming to the human eye perception are obtained by combining the human eye visual characteristics with the quaternion. Experiments show that the evaluation of the color image is superior to other methods, and the evaluation result is more consistent with the image perceived by human eyes. The image quality evaluation method of this embodiment will be described in detail below with reference to the experimental results:
the method comprises the following steps of firstly, constructing a mathematical evaluation model of an original reference image and a distorted image to be evaluated by analyzing human eye visual characteristics:
the visual characteristics of human eyes, which are provided on the basis of understanding the physiological structure of the human visual system, are closely related to how human eyes observe the external environment and the image. Since the person is the final recipient of the image, the evaluation result is consistent with what the human eye actually sees, and the present embodiment constructs a corresponding mathematical model by analyzing the visual characteristics of the human eye.
The human eye is similar to a convex lens with variable focal length, but, affected by the complex structure of the human brain, it differs from an ordinary convex lens. In general, human visual characteristics include the foveal characteristic, visual multichannel characteristic, visual nonlinearity, contrast sensitivity and masking effects. This embodiment builds a mathematical evaluation model from the foveal characteristic, the visual multichannel characteristic and the masking effect.
(1) Function of spatial position
The foveal characteristic of the human visual system means that when an image appears, the central position information of the image can be firstly observed by human eyes, and especially the position change information of the texture edge near the center of the image is easy to be sensed by the human eyes.
The human eye sees the center of the image first and then spreads to the surroundings, and points at the same distance from the center should be treated equally by the human eye. As shown in fig. 2, assuming that O is the center of the image, the distances from the points on a circle to the center are equal, so the four points A, B, C, D have the same probability of being observed by the human eye, as do E and F.
This example draws on the literature (CHEN T, WU H R. Space variant median filters for the restoration of impulse noise corrupted images [J]. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 2001, 48(8): 784-789), whose formula, consistent with the foveal nature of the human visual system, shows how spatial resolution affects how the human eye views an image. The specific process first obtains the RGB tristimulus values of the original reference image and the distorted image to be evaluated, then extracts their spatial position information, and respectively constructs the spatial position function Q_L of the original reference image and the distorted image to be evaluated.
In the formula, e_L is the distance from the pixel (i, j) observed by the human eye to the image centre pixel (M/2, N/2), divided by the distance from the first pixel (0, 0) of the image to the centre pixel; e_c is a constant determined from test results, and after testing this embodiment sets e_c to 0.6.
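As a concrete illustration of the definition above, the normalized eccentricity e_L can be computed as follows (a minimal numpy sketch; the function name is illustrative and the closed form of Q_L itself is not legible in this copy, so only e_L and the constant e_c = 0.6 are reproduced):

```python
import numpy as np

def foveation_eccentricity(M, N):
    """e_L for every pixel (i, j) of an M x N image: the distance to the
    centre pixel (M/2, N/2), divided by the distance from the first pixel
    (0, 0) to the centre, so e_L is 0 at the centre and 1 at the corner."""
    ci, cj = M / 2.0, N / 2.0
    ii, jj = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    dist = np.sqrt((ii - ci) ** 2 + (jj - cj) ** 2)
    d_max = np.sqrt(ci ** 2 + cj ** 2)  # distance from (0, 0) to the centre
    return dist / d_max

e_c = 0.6  # constant fixed experimentally in the embodiment
eL = foveation_eccentricity(8, 8)
```

Pixels near the centre thus get e_L near 0, consistent with the foveal property that the centre is observed first.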
(2) Texture edge complexity function
The masking effect of the human visual system means that phenomena that could otherwise be noticed are neglected due to the presence of other phenomena. In different regions, the masking effect of the human visual system can be reflected by the respective weight ratios, which is more consistent with the characteristics of the image observed by the human eye.
This embodiment extracts the texture feature information and edge feature information of the image and obtains the texture edge complexity function Q_TE of the image. The larger Q_TE is, the simpler the texture, the more the human eye attends to it, and the greater its influence on perceived image quality; conversely, the smaller Q_TE is, the more complex the texture and the more easily it is ignored by the human eye. The specific calculation process is as follows:
The gradient direction is first calculated as
θ(i, j) = arctan( g_v(i, j) / g_h(i, j) )
where θ(i, j) represents the gradient direction of pixel (i, j), and g_h(i, j) and g_v(i, j) represent the horizontal and vertical gradient values of pixel (i, j). The corresponding image edge feature information is calculated with the Sobel edge detection operator, and the edge information is normalized and recorded as E(i, j). The gradient direction over the range [0, 360°) is divided into the following regions, as shown in the following equation:
θ(i,j)∈{0°,180°,45°,225°,90°,270°,135°,315°} (3)
where 0 ° and 180 °,45 ° and 225 °,90 ° and 270 °,135 ° and 315 ° are symmetric about the origin, respectively, i.e. there are 4 different directional regions.
Calculating the complexity of the texture:
let a1The number of direction types, i.e., the number of types of θ (i, j), a2The number of edge points, i.e., the number of pixels having an E (i, j) ═ 1 is calculated. When a is2Smaller than a set threshold value, a20, otherwise a2The threshold value was set to 40 by experimental testing as 1. Therefore, we apply the texture complexity function Q of a certain pixel point (i, j) in the imageTIs defined as follows:
edge complexity:
first, three vectors P ═ 1,0,2,0,1, L ═ 1,4,6,4,1, and E ═ 1, -2,0,2,1 are defined. Wherein, P represents a 'point' feature descriptor, L represents a 'line' feature descriptor, E represents an 'edge' feature descriptor, and 6 operators can be obtained by using the 3 operatorsMasking: l isT×E,LT×P,ET×L,ET×P,PT×L,PT× E. let f be the responses of the 6 masks at a certain pixel point (i, j) in the imagei,j(LT×E),fi,j(LT×P),fi,j(ET×L),fi,j(ET×P),fi,j(PT×L),fi,j(PT× E), the edge complexity of the pixel point (i, j) is:
texture edge complexity function for pixel (i, j):
Q_TE = Q_T × Q_E (6)
The larger the result, the weaker the masking effect and the simpler the texture, so the human eye sees it clearly, i.e. the stronger its visual effect on the human eye.
(3) Local variance
The multi-channel nature of the human visual system means that the human eye observes images in different channels, and only general outlines can be distinguished when the resolution is low, and detailed information can be distinguished when the resolution is high. The detail information of the image can be represented by the local variance of the image, so the local variance of the image is used as a means for describing and analyzing the content information of the image, and some important structural information of the image can be summarized by the local variance distribution of the image.
Q_V denotes the variance of the local region of image I centered on pixel (i, j), i.e. the local variance. This embodiment first converts the original reference image and the distorted image to be evaluated from RGB space to YUV color space and calculates the local variance on the Y (luminance) component. A sliding window divides the Y component of the image into non-overlapping blocks, and the variance of each block is the local variance of the image. For each image block I_{i,j} containing L pixels, with η_p denoting each pixel inside it, the local variance can be expressed as
Q_V = (1/L) Σ_{p=1}^{L} (η_p − μ_{i,j})²
where μ_{i,j} is the mean of block I_{i,j}.
Since the size and manner of blocking affect the image structure, the pixels η_p within a block I_{i,j} are weighted with the Gaussian weighting method mentioned in the literature (Z. Wang, A. C. Bovik, et al. Image quality assessment: from error visibility to structural similarity [J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612):
Block mean value:
μ_{i,j} = Σ_{p=1}^{L} X_p η_p
Block local variance:
Q_V = Σ_{p=1}^{L} X_p (η_p − μ_{i,j})²
where X_p is the Gaussian weight of pixel η_p.
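The Gaussian-weighted block statistics can be sketched as follows (numpy sketch; the 8×8 block size and σ = 1.5 are assumptions borrowed from the SSIM literature cited above, not values stated in the patent):

```python
import numpy as np

def gaussian_weights(size=8, sigma=1.5):
    """Normalized 2-D Gaussian window X_p (weights sum to 1)."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    w = np.outer(g, g)
    return w / w.sum()

def block_local_variance(y, size=8, sigma=1.5):
    """Weighted local variance of each non-overlapping size x size block of
    the luminance plane y (trailing partial blocks are ignored)."""
    w = gaussian_weights(size, sigma)
    M, N = y.shape
    out = np.empty((M // size, N // size))
    for bi in range(M // size):
        for bj in range(N // size):
            blk = y[bi * size:(bi + 1) * size, bj * size:(bj + 1) * size]
            mu = np.sum(w * blk)                       # weighted block mean
            out[bi, bj] = np.sum(w * (blk - mu) ** 2)  # weighted variance
    return out
```

A flat luminance plane gives zero variance everywhere, as expected for a region with no detail information.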
(4) Color information
Hue, saturation, and brightness are three attributes of color, also referred to as three elements of color. They are inherent characteristics of color and are different from each other. Hue and saturation may be represented by chroma. The only feature of a grayscale image is luminance, while a color image also has chrominance features.
Luminance is a physical quantity describing the human sensation of light intensity; it reflects the intensity of the light emitted (or reflected) by a luminous body (reflector). Hue refers to the general tendency of the colors in a picture, the overall color effect. Saturation, also called color purity, refers to the vividness of a color and represents the proportion of chromatic components it contains; it increases with that proportion and is directly related to the illumination and the surface structure of the photographed object. Since hue and saturation can be represented uniformly by chromaticity, this embodiment represents the essential attributes of color using luminance and chrominance.
The sensitivity of the human visual system to brightness is higher than that to chroma, and the present embodiment uses a weighting method to represent the color information of an image, that is, for different color images, the weight proportion of brightness and chroma is different, and the specific calculation relationship is as follows:
Q_C = αQ_L + βQ_U (10)
where Q_L is the luminance information of the image and Q_U the chrominance information; α and β are the respective weights of luminance and chrominance, found through experimental testing to be best at α = 1.063 and β = 0.937.
Step two, respectively constructing quaternion matrixes of the original reference image and the distorted image to be evaluated, and performing singular value decomposition on the quaternion matrixes to obtain singular value eigenvectors of the image:
(a) quaternion
In 1843 the mathematician W. R. Hamilton created quaternions. A quaternion contains 4 parts, 1 real part plus 3 imaginary parts, and its basic form is:
q = q_r + q_i·i + q_j·j + q_k·k
where q_r, q_i, q_j, q_k are four real numbers and the imaginary units satisfy:
i² = j² = k² = ijk = −1
quaternion matrixIn the real number domain, it can be decomposed into the following forms:
the quaternion matrix singular value decomposition theorem can be expressed as: for any quaternion matrix Qe(q)=U(q)Λ'V(q)x, let rank (a) ═ r, then there is a quaternion unitary matrix U(q)And V(q)So that
Q(q)=U(q)ΛV(q)λ
Wherein,
and satisfy lambdai∈R,|λ1|≥|λ2|≥…≥|λr|>0,λiAre non-zero singular values.
(b) Quaternion representation
In this embodiment, the four pieces of feature information of the color image obtained by the above analysis are integrated into a quaternion form, as follows:
Q = Q_C + Q_L·i + Q_TE·j + Q_V·k (11)
where Q_C is the color information of the image, Q_L the spatial position information, Q_TE the texture edge information, and Q_V the local variance of the image.
In this way, an M × N color image can be regarded as a quaternion matrix, and the singular value eigenvectors of the quaternion matrix represent the energy characteristics of the quaternion, so that the quaternion matrix obtained from the color image can also be used to represent the energy characteristics of the corresponding color image.
Since a quaternion matrix Q = Q_r + Q_i·i + Q_j·j + Q_k·k can be represented by a real matrix, this embodiment converts Q into its corresponding real matrix form for singular value decomposition (SVD). Each quaternion matrix yields a singular value feature vector through singular value decomposition, and every element of the feature vector is a real number greater than 0. Since the theory of singular value decomposition of quaternion matrices is mature, it is not elaborated here for reasons of space.
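The real-matrix route mentioned above can be sketched like this: the quaternion matrix Q = A + Bi + Cj + Dk maps to a 4M×4N real block matrix whose singular values are the quaternion singular values, each repeated four times, so keeping every fourth singular value recovers the feature vector (a numpy sketch; the block layout is the standard left-regular representation, which the patent does not spell out):

```python
import numpy as np

def quaternion_real_matrix(A, B, C, D):
    """Real 4M x 4N representation of the quaternion matrix Q = A + Bi + Cj + Dk
    (A, B, C, D are real M x N arrays)."""
    return np.block([
        [A, -B, -C, -D],
        [B,  A, -D,  C],
        [C,  D,  A, -B],
        [D, -C,  B,  A],
    ])

def quaternion_singular_values(A, B, C, D):
    """Singular value feature vector of Q: each quaternion singular value
    appears four times in the real representation, so keep every fourth."""
    s = np.linalg.svd(quaternion_real_matrix(A, B, C, D), compute_uv=False)
    return s[::4]
```

For a purely real Q (B = C = D = 0) this reduces to the ordinary SVD of A, which is a convenient sanity check on the representation.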
Step three, measuring the distortion degree of the image using the Euclidean distance between the singular value feature vectors of the original reference image and the distorted image to be evaluated:
the embodiment measures the corresponding image distortion by using Euclidean Distance (Euclidean Distance) of singular value feature vectors of an original reference image and a distorted image to be evaluated, namely measuring the corresponding image distortion
Wherein λ isiAndfor singular value eigenvectors corresponding to the original reference image and the distortion image to be evaluated obtained by calculation, K is the minimum value of the eigenvalues of the two singular value eigenvectors, namely the minimum value of the two quaternion matrix ranks:
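A minimal sketch of the distance itself (function name illustrative):

```python
import numpy as np

def svd_distance(s_ref, s_dist):
    """Euclidean distance between the singular value feature vectors of the
    reference and distorted images, truncated to the shorter length K."""
    K = min(len(s_ref), len(s_dist))
    d = np.asarray(s_ref[:K], dtype=float) - np.asarray(s_dist[:K], dtype=float)
    return float(np.sqrt(np.sum(d ** 2)))
```

An undistorted copy gives distance 0; larger distances indicate stronger distortion.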
according to the color image quality evaluation method based on the HVS and the quaternion, the visual characteristics of human eyes and the quaternion are combined, the evaluation result is more consistent with the effect of the human eyes for perceiving the image, the traditional method of cutting R, G, B three channels is improved, the integrity of color information is well guaranteed, the extracted image information comprises global and local information, and the evaluation result can represent all information of the image more completely. The evaluation result is superior to the conventional SSIM and other typical image quality evaluation algorithms, and the experimental result of this embodiment will be analyzed from two aspects as follows:
1) nonlinear fitting of the proposed quality evaluation method, together with PSNR, SSIM, MS-SSIM, Y-SVD and GMSD, against the DMOS values; 2) performance comparison of the proposed quality evaluation method with PSNR, SSIM, MS-SSIM, Y-SVD and GMSD.
The quality evaluation pictures come from the Image Quality Assessment Database Release 2 provided by the Laboratory for Image and Video Engineering (LIVE) at the University of Texas at Austin, USA, totalling 982 images with five distortion types: JPEG2000, JPEG, white Gaussian noise, Gaussian blur and fast-fading Rayleigh channel distortion. When comparing algorithms, the dimensions and units of the scores of different algorithms differ, so the objective image quality scores obtained by each algorithm to be evaluated are subjected to nonlinear regression, using the Logistic function as the nonlinear mapping function:
wherein x is the raw quality score produced by the algorithm under evaluation provided by the invention, and α1, α2, α3, α4 are parameters adjusted adaptively during the nonlinear regression. The indexes used to quantitatively evaluate the test results are the widely recognized and frequently cited MAE, RMSE, CC, SROCC and OR.
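The exact form of the Logistic mapping is not reproduced in this text; one 4-parameter logistic commonly used for this kind of regression can be fitted as sketched below (the particular parameterization and all names are assumptions, not necessarily the patent's):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, a1, a2, a3, a4):
    # One common 4-parameter logistic used for IQA score regression;
    # the patent's exact functional form may differ.
    return (a1 - a2) / (1.0 + np.exp(-(x - a3) / a4)) + a2

def fit_objective_to_dmos(scores, dmos):
    # Nonlinear regression of raw objective scores onto the DMOS scale,
    # with the parameters adjusted adaptively by least-squares fitting.
    scores = np.asarray(scores, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    p0 = [dmos.max(), dmos.min(), scores.mean(), scores.std() + 1e-6]
    params, _ = curve_fit(logistic4, scores, dmos, p0=p0, maxfev=10000)
    return logistic4(scores, *params), params
```

The mapped scores, rather than the raw ones, are then compared against DMOS with the indexes below.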
1) The Mean Absolute Error (MAE) between the subjective scores and the objective scores after nonlinear regression reflects the mean error level between objective and subjective quality evaluation results; the smaller the mean absolute error, the higher the accuracy of the image quality evaluation result. It is defined as: MAE = (1/N) Σ_{i=1}^{N} |x_i − y_i|
2) The Root Mean Square Error (RMSE) between the subjective and objective scores after nonlinear regression reflects the accuracy of the objective evaluation result; the smaller the error, the higher the accuracy of the image quality evaluation result. It is defined as: RMSE = sqrt( (1/N) Σ_{i=1}^{N} (x_i − y_i)² )
3) The Pearson linear Correlation Coefficient (CC) between the subjective and objective scores after nonlinear regression reflects the consistency and accuracy of the objective evaluation result; its range is [-1, 1], and the closer its absolute value is to 1, the better the correlation between the subjective and objective evaluation methods. It is defined as: CC = Σ_{i=1}^{N} (x_i − x̄)(y_i − ȳ) / sqrt( Σ_{i=1}^{N} (x_i − x̄)² · Σ_{i=1}^{N} (y_i − ȳ)² )
4) The Spearman Rank Order Correlation Coefficient (SROCC) between the subjective and objective scores after nonlinear regression is a widely used nonparametric statistic that reflects the monotonicity between objective and subjective quality evaluation results; its range is [-1, 1], and the closer its absolute value is to 1, the better the consistency between the subjective and objective evaluation methods. It is defined as: SROCC = 1 − 6 Σ_{i=1}^{N} (u_i − v_i)² / (N(N² − 1))
5) The Outlier Ratio (OR) between the subjective and objective scores after nonlinear regression reflects the stability and predictive power of the objective evaluation model; its range is [0, 1], and the smaller the value, the better the consistency between subjective and objective evaluation and the better the predictive power of the model. It is defined as: OR = N_out / N
where N is the total number of images in the database, i.e. 982; x_i and y_i respectively denote the subjective and objective evaluation values of the i-th image after nonlinear regression; u_i and v_i respectively denote the ranks of the subjective and objective evaluation values of the i-th image among all evaluation values in the database; and N_out is the number of images whose objective evaluation value deviates from the subjective evaluation value by more than twice the standard deviation of the subjective evaluation values.
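Under the symbol definitions above, the five indexes can be computed as sketched below (a minimal sketch with my own names; the outlier criterion takes twice the standard deviation of the subjective evaluation values as the error threshold, which is one plausible reading of the definition):

```python
import numpy as np
from scipy import stats

def iqa_metrics(objective, subjective):
    # MAE, RMSE, CC, SROCC and OR between regressed objective scores and
    # subjective DMOS values, following the definitions in the text.
    x = np.asarray(objective, dtype=float)
    y = np.asarray(subjective, dtype=float)
    n = len(x)
    mae = np.mean(np.abs(x - y))              # mean absolute error
    rmse = np.sqrt(np.mean((x - y) ** 2))     # root mean square error
    cc = np.corrcoef(x, y)[0, 1]              # Pearson linear correlation
    srocc = stats.spearmanr(x, y)[0]          # Spearman rank-order correlation
    # Outlier Ratio: fraction of images whose prediction error exceeds twice
    # the standard deviation of the subjective scores (assumed criterion).
    n_out = np.sum(np.abs(x - y) > 2.0 * np.std(y))
    return {"MAE": mae, "RMSE": rmse, "CC": cc, "SROCC": srocc, "OR": n_out / n}
```

A perfect predictor yields MAE = RMSE = OR = 0 and CC = SROCC = 1.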
Fig. 3 shows scatter diagrams of each algorithm against the subjective evaluation DMOS. Each point in Fig. 3 represents one image: its abscissa is the objective quality score the algorithm assigns to that image, its ordinate is the image's subjective DMOS value, and the solid line is the fitted curve. The more closely the scatter points cluster around the fitted curve, the better the agreement between the algorithm and the subjective evaluation results, and the better the algorithm. It can be seen that the scatter points of the 982 images under the proposed method lie closest to the fitted curve, which shows that after nonlinear fitting the method of the invention performs better than the other methods compared.
Table 1. Performance comparison of image quality evaluation methods on the LIVE image database
From the experimental data in Table 1, the quality evaluation method of the invention performs best on every evaluation index: the mean absolute error and root mean square error are the smallest, the correlation with subjective visual perception is the highest, and the outlier ratio is the lowest. Because the PSNR algorithm ignores the correlation between pixels and treats every pixel equally, its overall performance is the worst among the six compared algorithms. The SSIM algorithm evaluates image quality from the structural information of the image, which is related to the way human vision perceives. The MS-SSIM algorithm builds on SSIM with multi-resolution analysis to evaluate quality at multiple scales, so its performance is superior to PSNR and SSIM. The Y-SVD algorithm, which applies singular value decomposition only to the luminance component, clearly outperforms PSNR, showing that singular-value-decomposition-based approaches have certain advantages. Visually, the curve obtained by nonlinear fitting of the GMSD algorithm to the DMOS values is closest to a straight line, but some scatter points are dispersed far from it. The last row of Table 1 shows that the quality evaluation method of the invention is clearly superior to the traditional PSNR algorithm, the structural similarity (SSIM) algorithm, the multi-scale structural similarity (MS-SSIM) algorithm, the singular value decomposition (SVD) algorithm, and the gradient magnitude similarity deviation (GMSD) algorithm, indicating that the proposed image quality evaluation algorithm based on quaternions and human visual characteristics better reflects the subjective visual perception of images by the human eye.
Because the 982 images consist of 5 image sub-libraries with different distortion types, to further demonstrate the superiority of the proposed quality evaluation method, this embodiment compares the performance of the HVS-QSVD algorithm against the GMSD and SSIM algorithms on each of the 5 sub-libraries. As shown in Fig. 4, the plots are arranged in five groups of three, each group showing the nonlinear fitting curves of the HVS-QSVD, GMSD and SSIM algorithms. Groups (a) through (e) of Fig. 4 correspond to JPEG2000, JPEG, white Gaussian noise, Gaussian blur, and fast-fading Rayleigh channel distortion, respectively. It can be seen that for every distortion type the proposed HVS-QSVD algorithm fits the subjective evaluation values better than the GMSD and SSIM algorithms.
In the color image quality evaluation method based on HVS and quaternions described in Embodiment 1, a mathematical model is constructed from human visual characteristics so that the evaluation result agrees better with human perception, and quaternion singular value decomposition is used to extract the feature information of the image, improving on the traditional approach of splitting the R, G and B channels. The experimental results show that the evaluation result is more consistent with how the human eye perceives the image.
The invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the drawings is only one embodiment of the invention, and the invention is not actually limited thereto. Therefore, if those skilled in the art, having received this teaching and without departing from the spirit of the invention, devise structural modes and embodiments similar to this technical solution without inventive effort, these shall fall within the protection scope of the invention.

Claims (1)

1. A color image quality evaluation method based on HVS and quaternion comprises the following steps:
step one, constructing a mathematical evaluation model of the original reference image and the distorted image to be evaluated by analyzing human visual characteristics, specifically comprising:
(1) acquiring RGB tristimulus values of an original reference image and a distorted image to be evaluated;
(2) extracting the spatial position information of the original reference image and the distorted image to be evaluated, and constructing a spatial position function Q_L and a texture edge complexity function Q_TE; wherein the spatial position function Q_L is constructed by using the foveal characteristic of the human visual system, said spatial position function being:
In the formula, e_L is the quotient of the distance from the pixel point (i, j) observed by human vision to the image center pixel point (M/2, N/2) and d; e_c is a constant;
construction of texture edge complexity function Q using masking effects of the human visual systemTESaid texture edge complexity function QTE=QT×QE
In the formula, Q_T is the texture complexity function of the pixel point (i, j), and Q_E is the edge complexity function of the pixel point (i, j);
(3) converting the original reference image and the distorted image to be evaluated from RGB space into YUV color space, extracting the image luminance information to construct a local variance Q_V, and extracting the image luminance and chrominance information to construct a color function Q_C; wherein,
construction of local variance Q using multi-channel characteristics of the human visual systemVThe local variance of
Wherein the luminance component of the image is divided into non-overlapping blocks to obtain I_{i,j}, and L is the number of pixels η_p contained in the image block I_{i,j};
said color function being
Q_C = α·Q_L + β·Q_U
In the formula, Q_L is the luminance information of the image, Q_U is the chrominance information of the image, and α and β are the respective weights of luminance and chrominance;
step two, Q is addedL、QV、QTEAs the imaginary part of a quaternion, QCRespectively constructing quaternion matrixes of an original reference image and a distorted image to be evaluated as a real part of a quaternion, and performing singular value decomposition on the quaternion matrixes to obtain singular value eigenvectors of the image;
measuring the image distortion degree by utilizing Euclidean distance of singular value feature vectors of an original reference image and a distorted image to be evaluated, wherein the Euclidean distance
In the formula, λ_i is the singular value feature vector of the original reference image, λ̂_i is the singular value feature vector of the distorted image to be evaluated, and K is the minimum of the numbers of eigenvalues of the two singular value feature vectors, namely the minimum of the two quaternion matrix ranks.
CN201410650245.9A 2014-11-14 2014-11-14 A kind of color image quality evaluation method based on HVS and quaternary number Active CN104361593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410650245.9A CN104361593B (en) 2014-11-14 2014-11-14 A kind of color image quality evaluation method based on HVS and quaternary number

Publications (2)

Publication Number Publication Date
CN104361593A CN104361593A (en) 2015-02-18
CN104361593B true CN104361593B (en) 2017-09-19

Family

ID=52528851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410650245.9A Active CN104361593B (en) 2014-11-14 2014-11-14 A kind of color image quality evaluation method based on HVS and quaternary number

Country Status (1)

Country Link
CN (1) CN104361593B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020043279A1 (en) * 2018-08-29 2020-03-05 Telefonaktiebolaget Lm Ericsson (Publ) Video fidelity measure
WO2020043280A1 (en) * 2018-08-29 2020-03-05 Telefonaktiebolaget Lm Ericsson (Publ) Image fidelity measure

Families Citing this family (14)

Publication number Priority date Publication date Assignee Title
CN105528776B (en) * 2015-08-07 2019-05-10 上海仙梦软件技术有限公司 The quality evaluating method kept for the conspicuousness details of jpeg image format
CN105574854B (en) * 2015-12-10 2019-02-12 小米科技有限责任公司 Determine the monistic method and device of image
WO2018002910A1 (en) * 2016-06-28 2018-01-04 Cognata Ltd. Realistic 3d virtual world creation and simulation for training automated driving systems
CN106683082B (en) * 2016-12-19 2019-08-13 华中科技大学 It is a kind of complete with reference to color image quality evaluation method based on quaternary number
CN106600597B (en) * 2016-12-22 2019-04-12 华中科技大学 It is a kind of based on local binary patterns without reference color image quality evaluation method
WO2018140158A1 (en) * 2017-01-30 2018-08-02 Euclid Discoveries, Llc Video characterization for smart enconding based on perceptual quality optimization
CN107862678B (en) * 2017-10-19 2020-03-17 宁波大学 Fundus image non-reference quality evaluation method
CN109191431A (en) * 2018-07-27 2019-01-11 天津大学 High dynamic color image quality evaluation method based on characteristic similarity
CN109345520A (en) * 2018-09-20 2019-02-15 江苏商贸职业学院 A kind of quality evaluating method of image definition
CN109389591B (en) * 2018-09-30 2020-11-20 西安电子科技大学 Color descriptor-based color image quality evaluation method
CN109903247B (en) * 2019-02-22 2023-02-03 西安工程大学 High-precision graying method for color image based on Gaussian color space correlation
CN110793472B (en) * 2019-11-11 2021-07-27 桂林理工大学 Grinding surface roughness detection method based on quaternion singular value entropy index
CN112950723B (en) * 2021-03-05 2022-08-02 湖南大学 Robot camera calibration method based on edge scale self-adaptive defocus fuzzy estimation
CN116152249B (en) * 2023-04-20 2023-07-07 济宁立德印务有限公司 Intelligent digital printing quality detection method

Citations (1)

Publication number Priority date Publication date Assignee Title
CN1897634A (en) * 2006-06-08 2007-01-17 复旦大学 Image-quality estimation based on supercomplex singular-value decomposition

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US6873724B2 (en) * 2001-08-08 2005-03-29 Mitsubishi Electric Research Laboratories, Inc. Rendering deformable 3D models recovered from videos

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN1897634A (en) * 2006-06-08 2007-01-17 复旦大学 Image-quality estimation based on supercomplex singular-value decomposition

Non-Patent Citations (3)

Title
3-D Free-form Shape Measuring System Using Unconstrained Range Sensor; REN Tongqun et al.; Chinese Journal of Mechanical Engineering; 2011-11-15; vol. 24, no. 6, pp. 1095-1102 *
A quaternion model for video quality evaluation based on HVS feature parameter extraction; HE Yeming et al.; Computer Applications and Software; 2014-07-15; vol. 31, no. 7, pp. 132-140 *
A quaternion-based color image quality evaluation method; WANG Yuqing et al.; Journal of North University of China (Natural Science Edition); 2010-02-15; vol. 31, no. 1, pp. 59-64 *

Also Published As

Publication number Publication date
CN104361593A (en) 2015-02-18

Similar Documents

Publication Publication Date Title
CN104361593B (en) A kind of color image quality evaluation method based on HVS and quaternary number
Panetta et al. No reference color image contrast and quality measures
Panetta et al. Human-visual-system-inspired underwater image quality measures
Mohammadi et al. Subjective and objective quality assessment of image: A survey
US9706111B2 (en) No-reference image and video quality evaluation
Zheng et al. Qualitative and quantitative comparisons of multispectral night vision colorization techniques
Amirshahi et al. Image quality assessment by comparing CNN features between images
Yue et al. Biologically inspired blind quality assessment of tone-mapped images
Al-Dwairi et al. Optimized true-color image processing
Lee et al. Toward a no-reference image quality assessment using statistics of perceptual color descriptors
He et al. Image quality assessment based on S-CIELAB model
Yue et al. Blind stereoscopic 3D image quality assessment via analysis of naturalness, structure, and binocular asymmetry
CN103780895B (en) A kind of three-dimensional video quality evaluation method
Liu et al. No-reference image quality assessment method based on visual parameters
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
Chen et al. Blind stereo image quality assessment based on binocular visual characteristics and depth perception
CN106960432B (en) A kind of no reference stereo image quality evaluation method
Jadhav et al. Performance evaluation of structural similarity index metric in different colorspaces for HVS based assessment of quality of colour images
Yuan et al. Color image quality assessment with multi deep convolutional networks
Chang et al. Image Quality Evaluation Based on Gradient, Visual Saliency, and Color Information
Khan et al. Sparsity based stereoscopic image quality assessment
CN108171704B (en) No-reference image quality evaluation method based on excitation response
Kim et al. No-reference contrast measurement for color images based on visual stimulus
He et al. Color fractal structure model for reduced-reference colorful image quality assessment
Guo et al. Color difference matrix index for tone-mapped images quality assessment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant