CN110570420A - No-reference contrast distortion image quality evaluation method - Google Patents

No-reference contrast distortion image quality evaluation method

Info

Publication number
CN110570420A
CN110570420A
Authority
CN
China
Prior art keywords
image
color
model
svr
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910872439.6A
Other languages
Chinese (zh)
Other versions
CN110570420B (en
Inventor
卢伟
吕文静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201910872439.6A priority Critical patent/CN110570420B/en
Publication of CN110570420A publication Critical patent/CN110570420A/en
Application granted granted Critical
Publication of CN110570420B publication Critical patent/CN110570420B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a no-reference contrast-distortion image quality evaluation method, which comprises the following steps: extracting color moment and information entropy features in multiple color spaces from the contrast-distorted image, and constructing a feature set describing the image distortion; constructing a training set from the image distortion feature set and the prior scores, and building a prediction model for image quality evaluation; and extracting the contrast distortion feature set of the image to be evaluated, computing with the image quality evaluation prediction model, and predicting the quality of the image to be evaluated. The method not only fuses multiple color spaces but also combines color moment and information entropy features, which ensures the accuracy and effectiveness of the evaluation and fills a gap in the field of no-reference contrast-distortion image quality evaluation.

Description

No-reference contrast distortion image quality evaluation method
Technical Field
The invention relates to the technical field of digital image forensics, and in particular to a no-reference contrast-distortion image quality evaluation method.
Background
With the rapid development of electronic technology and the rapid popularization of digital imaging devices, digital images are widely used in people's daily office work, study, and life. The digital image has become an important carrier of information and plays an irreplaceable role in fields such as the military, networking, archaeology, and the judiciary. At the same time, with the rapid development of various kinds of editing software, ordinary users can easily edit, modify, and beautify images with these tools. If such edited and tampered digital images are treated as important information, they may mislead people and have a harmful influence on people's lives and even on society as a whole. Therefore, research on digital image forensics technology has become an important and active area.
Visual quality evaluation measures the distortion intensity of an image by modeling the distortion in the image, in a way that is consistent with the subjective perception of the human eye. Applied to the field of digital image forensics, it plays an important role in image analysis and can provide new ideas for research on image and video forensics. Image quality evaluation methods can be divided into three categories according to the availability of a reference image: full-reference, reduced-reference, and no-reference. Since a reference image is generally unavailable in practical applications, no-reference image quality evaluation is a research hotspot.
Existing no-reference image quality evaluation methods fall into two main categories according to distortion type: methods for a specific distortion type and general-purpose methods. However, there is no evaluation method dedicated to the contrast distortion type, and because contrast distortion differs from other distortion types, general-purpose methods perform poorly on contrast-distorted images.
Disclosure of the Invention
The invention provides a no-reference contrast-distortion image quality evaluation method, aiming to overcome the technical defects that the existing no-reference image quality evaluation field has no method dedicated to the contrast distortion type and that general-purpose evaluation methods perform poorly on it.
In order to solve the technical problems, the technical scheme of the invention is as follows:
A no-reference contrast-distortion image quality evaluation method comprises the following steps:
S1: extracting color moment and information entropy features in multiple color spaces from the contrast-distorted image, and constructing a feature set describing the image distortion;
S2: constructing a training set from the image distortion feature set and the prior scores, and building a prediction model for image quality evaluation;
S3: extracting the contrast distortion feature set of the image to be evaluated, computing with the image quality evaluation prediction model, and predicting the quality of the image to be evaluated.
The step S1 comprises the following steps:
S11: converting the image from the RGB color space to the XYZ color space, and then from the XYZ color space to the CIELab color space; the three channels of the RGB color space are denoted R, G, B, and the three channels of the CIELab color space are denoted L, a, b;
S12: extracting the first- to third-order central color moment features of the 6 color channels obtained in step S11, where i ∈ {1, 2, 3} denotes the order of the color moment and j ∈ {R, G, B, L, a, b} denotes the color channel;
S13: extracting the information entropy features of the 6 color channels obtained in step S11, recorded as H_j, where j ∈ {R, G, B, L, a, b} denotes the color channel.
In step S11, the image is converted from the RGB color space to the XYZ color space by a linear matrix transform, and then from the XYZ color space to the CIELab color space by the standard CIELab conversion formulas. Thus, 6 color channel components are obtained from each picture, namely: R, G, B and L, a, b;
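The two-stage conversion above can be sketched as follows; since this text does not reproduce the patent's conversion coefficients, the standard sRGB (D65) RGB-to-XYZ matrix and CIELab formulas are assumed here as illustrative stand-ins:

```python
import numpy as np

# Standard (assumed) sRGB-to-XYZ matrix under the D65 illuminant.
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(rgb):
    """rgb: H x W x 3 array with linear values in [0, 1]."""
    return rgb @ M_RGB2XYZ.T

def xyz_to_lab(xyz, white=(0.9505, 1.0, 1.089)):
    """CIELab conversion with a D65 reference white."""
    t = xyz / np.asarray(white)
    # Piecewise cube-root function of the CIELab definition.
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

Together with the original R, G, B planes, this yields the six channel components R, G, B, L, a, b used in steps S12 and S13.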
In step S12, an image is denoted by I. Color moments of multiple orders are used, here up to order 3, computed as central moments:
firstMoment(I) = E(I)
secondMoment(I) = E[(I − E(I))²]
thirdMoment(I) = E[(I − E(I))³]
where E is the averaging operator. Color moment features are extracted from each of the 6 color channels obtained in step S11, where i ∈ {1, 2, 3} denotes the order of the color moment and j ∈ {R, G, B, L, a, b} denotes the color channel;
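A minimal sketch of the per-channel central color moments; the function name is an assumption of this sketch, and the plain central-moment form for the second and third orders follows the "central moment" description above:

```python
import numpy as np

def color_moments(channel):
    """First- to third-order central color moments of one channel.

    channel: array holding one of the six color channels
    (R, G, B, L, a, b) obtained in step S11.
    """
    c = channel.astype(np.float64).ravel()
    m1 = c.mean()                   # firstMoment(I) = E(I)
    m2 = np.mean((c - m1) ** 2)     # second central moment
    m3 = np.mean((c - m1) ** 3)     # third central moment
    return m1, m2, m3
```

Applying this to all six channels gives the 18 color moment features of the descriptor.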
In step S13, the information entropy describes the information complexity of the picture, and is calculated as:
H(I) = −Σ_i P_i(I) log₂ P_i(I)
where P_i(I) is the probability that a pixel of intensity i occurs in the image. Information entropy features are extracted from each of the 6 color channels obtained in step S11 and recorded as H_j, where j ∈ {R, G, B, L, a, b} denotes the color channel. Combining the color moment features and the information entropy features yields a 24-dimensional feature vector f (3 color moments and 1 entropy for each of the 6 channels) describing image contrast distortion.
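The entropy feature and the assembly of f can be sketched as follows; the 256-bin histogram over 8-bit intensities is an assumption of this sketch:

```python
import numpy as np

def channel_entropy(channel, bins=256):
    """Shannon entropy H = -sum_i P_i log2(P_i) of one color channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                    # empty bins contribute 0 * log 0 := 0
    return float(-(p * np.log2(p)).sum())

def feature_vector(channels):
    """Concatenate 3 central moments + 1 entropy per channel (6 x 4 = 24-D)."""
    f = []
    for c in channels:              # channels: the six arrays R, G, B, L, a, b
        c = np.asarray(c, dtype=np.float64)
        m1 = c.mean()
        f += [m1, np.mean((c - m1) ** 2), np.mean((c - m1) ** 3),
              channel_entropy(c)]
    return np.array(f)
```

The resulting 24-dimensional vector f is the per-image descriptor fed to the SVR model in step S2.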
In step S2, the prediction model for image quality evaluation is constructed with an SVR model.
The step S2 specifically comprises the following steps:
S21: combining the image contrast distortion feature set extracted in step S1 with the prior scores to form a data set;
S22: initializing the SVR model and setting initial values for all of its parameters;
S23: randomly dividing the data set obtained in step S21 into two parts, used respectively as the training set and the test set in the SVR parameter optimization process;
S24: optimizing the parameters with a grid search, and setting the optimized parameters as the initial parameters to complete the initialization of the model;
S25: training the initialized SVR model with the training set and the test set to obtain the prediction model for image quality evaluation.
In step S21, the prior score is the MOS/DMOS value of each picture in a public data set, used as a prior on picture quality to optimize the model during training.
The data set obtained in step S21 is expressed as {(f_1, Q_1), …, (f_k, Q_k)}, where k is the number of distorted images in the data set;
in step S22 the SVR variant is set: the ν-SVR implementation based on the RBF kernel function in the LIBSVM package is used;
in step S23, the Hold-Out partitioning method is used to randomly divide the data set obtained in step S21 into two parts of α% and (100 − α)%, used respectively as the training set and the test set in the SVR parameter optimization: the training set trains the model, and the test set evaluates its generalization ability;
in step S24, the parameter optimization of the SVR model is completed with a grid search over the parameters (C, γ, ε). The search is configured by (cmin, cmax, gmin, gmax, v, cstep, gstep, msestep), where cmin and cmax are the minimum and maximum of the parameter C; gmin and gmax are the minimum and maximum of the parameter γ; v is the SVM cross-validation parameter; and cstep, gstep, and msestep are the search step sizes of the three parameters (C, γ, ε). The output of the grid search is the initial value of the SVR model parameters (C, γ, ε).
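Steps S21–S25 can be sketched with scikit-learn's NuSVR (itself backed by LIBSVM) standing in for the ν-SVR described above; the synthetic data, the grid bounds, and the 80/20 split are illustrative assumptions, not the patent's values:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import NuSVR

# S21: synthetic stand-in for the data set {(f_1, Q_1), ..., (f_k, Q_k)}.
rng = np.random.default_rng(0)
X = rng.random((120, 24))              # 24-D feature vectors f from step S1
y = X.mean(axis=1) * 5                 # surrogate quality scores

# S23: Hold-Out split, alpha% training / (100 - alpha)% testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# S22/S24: nu-SVR with RBF kernel; grid search over (C, gamma), analogous to
# the (cmin..cmax, gmin..gmax) grid with v-fold cross-validation.
grid = GridSearchCV(NuSVR(kernel="rbf"),
                    {"C": 2.0 ** np.arange(-2, 5),
                     "gamma": 2.0 ** np.arange(-6, 1)},
                    cv=3)
grid.fit(X_tr, y_tr)                   # S25: train with the optimized parameters

# The fitted model predicts a quality score for an unseen feature vector.
score = grid.predict(X_te[:1])[0]
```

LIBSVM's ε (and its step msestep) is not searched here; scikit-learn's NuSVR exposes ν instead, which is one reason this is a sketch rather than the patent's exact procedure.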
The step S3 specifically comprises the following steps:
S31: extracting the contrast distortion descriptor of the image to be evaluated by the method of step S1, and constructing the feature set for the SVR prediction model;
S32: inputting the feature set into the trained prediction model and computing the prediction result, completing the quality evaluation of the image to be evaluated.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
The no-reference contrast-distortion image quality evaluation method provided by the invention first converts the input image from RGB into the CIELab color space and takes the color moments and the information entropy as the representation of the contrast distortion measure: the color moments describe the distribution of color information well, and together with the entropy they characterize the contrast distortion of the image. Finally, support vector regression is used to train a quality model that maps all the features to an objective quality score. The method thus not only fuses multiple color spaces but also combines color moment and information entropy features, which ensures the accuracy and effectiveness of the evaluation and fills a gap in the field of no-reference contrast-distortion image quality evaluation.
Drawings
Fig. 1 is a schematic flow diagram of the method of the invention;
Fig. 2 is the image to be evaluated in Example 2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in Fig. 1, a no-reference contrast-distortion image quality evaluation method comprises the following steps:
S1: extracting color moment and information entropy features in multiple color spaces from the contrast-distorted image, and constructing a feature set describing the image distortion;
S2: constructing a training set from the image distortion feature set and the prior scores, and building a prediction model for image quality evaluation;
S3: extracting the contrast distortion feature set of the image to be evaluated, computing with the image quality evaluation prediction model, and predicting the quality of the image to be evaluated.
In a specific implementation, the method first converts the input image from RGB into the CIELab color space and takes the color moments and the information entropy as the representation of the contrast distortion measure: the color moments describe the distribution of color information well, and together with the entropy they characterize the contrast distortion of the image. Finally, support vector regression is used to train a quality model that maps all the features to an objective quality score, so that the method not only fuses multiple color spaces but also combines color moment and information entropy features, ensuring the accuracy and effectiveness of the evaluation and filling a gap in the field of no-reference contrast-distortion image quality evaluation.
example 2
More specifically, on the basis of Embodiment 1, the image shown in Fig. 2 is evaluated, and the step S1 comprises the following steps:
S11: converting the image from the RGB color space to the XYZ color space, and then from the XYZ color space to the CIELab color space; the three channels of the RGB color space are denoted R, G, B, and the three channels of the CIELab color space are denoted L, a, b;
S12: extracting the first- to third-order central color moment features of the 6 color channels obtained in step S11, where i ∈ {1, 2, 3} denotes the order of the color moment and j ∈ {R, G, B, L, a, b} denotes the color channel;
S13: extracting the information entropy features of the 6 color channels obtained in step S11, recorded as H_j, where j ∈ {R, G, B, L, a, b} denotes the color channel.
More specifically, in step S11, the image is converted from the RGB color space to the XYZ color space by a linear matrix transform, and then from the XYZ color space to the CIELab color space by the standard CIELab conversion formulas. Thus, 6 color channel components are obtained from each picture, namely: R, G, B and L, a, b;
in step S12, an image is denoted by I; color moments of multiple orders are used, here up to order 3, computed as central moments:
firstMoment(I) = E(I)
secondMoment(I) = E[(I − E(I))²]
thirdMoment(I) = E[(I − E(I))³]
where E is the averaging operator. Color moment features are extracted from each of the 6 color channels obtained in step S11, where i ∈ {1, 2, 3} denotes the order of the color moment and j ∈ {R, G, B, L, a, b} denotes the color channel;
in step S13, the information entropy describes the information complexity of the picture and is calculated as:
H(I) = −Σ_i P_i(I) log₂ P_i(I)
where P_i(I) is the probability that a pixel of intensity i occurs in the image. Information entropy features are extracted from each of the 6 color channels obtained in step S11 and recorded as H_j, where j ∈ {R, G, B, L, a, b} denotes the color channel. Combining the color moment features and the information entropy features yields a 24-dimensional feature vector f describing image contrast distortion.
More specifically, in step S2, the prediction model for image quality evaluation is constructed with an SVR model.
More specifically, the step S2 comprises the following steps:
S21: combining the image contrast distortion feature set extracted in step S1 with the prior scores to form a data set;
S22: initializing the SVR model and setting initial values for all of its parameters;
S23: randomly dividing the data set obtained in step S21 into two parts, used respectively as the training set and the test set in the SVR parameter optimization process;
S24: optimizing the parameters with a grid search, and setting the optimized parameters as the initial parameters to complete the initialization of the model;
S25: training the initialized SVR model with the training set and the test set to obtain the prediction model for image quality evaluation.
More specifically, in step S21, the prior score is the MOS/DMOS value of each picture in a public data set, used as a prior on picture quality to optimize the model during training. For example, the CSIQ data set was created by the School of Electrical and Computer Engineering at Oklahoma State University, USA. It contains 30 reference images and 866 distorted images covering 6 distortion types: JPEG compression, JPEG2000 compression, global contrast reduction, additive Gaussian pink noise, additive white Gaussian noise, and Gaussian blur. The DMOS values of this database are derived statistically from about 5000 ratings given by 25 observers, and lie in the range [0, 1].
More specifically, the data set obtained in step S21 is expressed as {(f_1, Q_1), …, (f_k, Q_k)}, where k is the number of distorted images in the data set. The contrast-distorted images used to train the model in this example come from the public image quality database CID2013, with k = 400, so the feature set is a 24 × 400 matrix.
In step S22, the SVR variant is set (see "A Tutorial on Support Vector Regression," Statistics and Computing): the ν-SVR implementation based on the RBF kernel function in the LIBSVM package is used.
In step S23, the Hold-Out partitioning method is used to randomly divide the data set obtained in step S21 into two parts of α% and (100 − α)%, used respectively as the training set and the test set in the SVR parameter optimization: the training set trains the model, and the test set evaluates its generalization ability. In this example, α = 80.
In step S24, the parameter optimization of the SVR model is completed with a grid search over the parameters (C, γ, ε). The search is configured by (cmin, cmax, gmin, gmax, v, cstep, gstep, msestep), where cmin and cmax are the minimum and maximum of the parameter C; gmin and gmax are the minimum and maximum of the parameter γ; v is the SVM cross-validation parameter; and cstep, gstep, and msestep are the search step sizes of the three parameters (C, γ, ε). The output of the grid search is the initial value of the SVR model parameters (C, γ, ε).
In the specific implementation, (cmin, cmax, gmin, gmax, v, cstep, gstep, msestep) is set to (−8, 8, −8, 8, 3, 0.1, 0.1, 4). The output of the grid search is the initial value of the SVR model parameters (C, γ, ε); here the parameters (C, γ, ε) are set to (4.9246, 0.2500, 0.0385).
More specifically, the step S3 comprises the following steps:
S31: extracting the contrast distortion descriptor of the image to be evaluated by the method of step S1, and constructing the feature set for the SVR prediction model;
in the specific implementation, the number of images to be evaluated is 1, so the computed feature vector is a 24 × 1 matrix.
S32: inputting the feature set into the trained prediction model and computing the prediction result, completing the quality evaluation of the image to be evaluated, specifically expressed as:
score = Model(f)
where the predicted score is the image quality of the image to be evaluated. The prediction result in this example is a real number in the interval [1, 5], with a value of 2.8902.
It should be understood that the above-described embodiments of the present invention are merely examples given to illustrate the invention clearly and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (8)

1. A no-reference contrast-distortion image quality evaluation method, characterized by comprising the following steps:
S1: extracting color moment and information entropy features in multiple color spaces from the contrast-distorted image, and constructing a feature set describing the image distortion;
S2: constructing a training set from the image distortion feature set and the prior scores, and building a prediction model for image quality evaluation;
S3: extracting the contrast distortion feature set of the image to be evaluated, computing with the image quality evaluation prediction model, and predicting the quality of the image to be evaluated.
2. The no-reference contrast-distortion image quality evaluation method of claim 1, characterized in that the step S1 comprises the following steps:
S11: converting the image from the RGB color space to the XYZ color space, and then from the XYZ color space to the CIELab color space; the three channels of the RGB color space are denoted R, G, B, and the three channels of the CIELab color space are denoted L, a, b;
S12: extracting the first- to third-order central color moment features of the 6 color channels obtained in step S11, where i ∈ {1, 2, 3} denotes the order of the color moment and j ∈ {R, G, B, L, a, b} denotes the color channel;
S13: extracting the information entropy features of the 6 color channels obtained in step S11, recorded as H_j, where j ∈ {R, G, B, L, a, b} denotes the color channel.
3. The no-reference contrast-distortion image quality evaluation method of claim 2, characterized in that in step S11 the image is converted from the RGB color space to the XYZ color space by a linear matrix transform, and then from the XYZ color space to the CIELab color space by the standard CIELab conversion formulas, so that 6 color channel components are obtained from each picture, namely: R, G, B and L, a, b;
in step S12, an image is denoted by I; color moments of multiple orders are used, here up to order 3, computed as central moments:
firstMoment(I) = E(I)
secondMoment(I) = E[(I − E(I))²]
thirdMoment(I) = E[(I − E(I))³]
where E is the averaging operator; color moment features are extracted from each of the 6 color channels obtained in step S11, where i ∈ {1, 2, 3} denotes the order of the color moment and j ∈ {R, G, B, L, a, b} denotes the color channel;
in step S13, the information entropy describes the information complexity of the picture and is calculated as H(I) = −Σ_i P_i(I) log₂ P_i(I), where P_i(I) is the probability that a pixel of intensity i occurs in the image; information entropy features are extracted from each of the 6 color channels obtained in step S11 and recorded as H_j, j ∈ {R, G, B, L, a, b}; combining the color moment features and the information entropy features yields the feature vector f describing image contrast distortion.
4. The no-reference contrast-distortion image quality evaluation method of claim 2, characterized in that in step S2 the prediction model for image quality evaluation is constructed with an SVR model.
5. The no-reference contrast-distortion image quality evaluation method of claim 4, characterized in that the step S2 comprises the following steps:
S21: combining the image contrast distortion feature set extracted in step S1 with the prior scores to form a data set;
S22: initializing the SVR model and setting initial values for all of its parameters;
S23: randomly dividing the data set obtained in step S21 into two parts, used respectively as the training set and the test set in the SVR parameter optimization process;
S24: optimizing the parameters with a grid search, and setting the optimized parameters as the initial parameters to complete the initialization of the model;
S25: training the initialized SVR model with the training set and the test set to obtain the prediction model for image quality evaluation.
6. The method of claim 5, characterized in that in step S21 the prior score is the MOS/DMOS value of each picture in a public data set, used as a prior on picture quality to optimize the model during training.
7. The method of claim 5, characterized in that the data set obtained in step S21 is expressed as {(f_1, Q_1), …, (f_k, Q_k)}, where k is the number of distorted images in the data set;
in step S22 the SVR variant is set: the ν-SVR implementation based on the RBF kernel function in the LIBSVM package is used;
in step S23, the Hold-Out partitioning method is used to randomly divide the data set obtained in step S21 into two parts of α% and (100 − α)%, used respectively as the training set and the test set in the SVR parameter optimization: the training set trains the model, and the test set evaluates its generalization ability;
in step S24, the parameter optimization of the SVR model is completed with a grid search over the parameters (C, γ, ε); the search is configured by (cmin, cmax, gmin, gmax, v, cstep, gstep, msestep), where cmin and cmax are the minimum and maximum of the parameter C; gmin and gmax are the minimum and maximum of the parameter γ; v is the SVM cross-validation parameter; cstep, gstep, and msestep are the search step sizes of the three parameters (C, γ, ε); the output of the grid search is the initial value of the SVR model parameters (C, γ, ε).
8. The method for evaluating the quality of a no-reference contrast distortion image according to claim 5, wherein step S3 specifically comprises the following steps:
S31: extracting the contrast distortion descriptor of the image to be evaluated by the method of step S1, and constructing the feature set for the SVR prediction model;
S32: inputting the feature set into the trained prediction model and computing the prediction result, thereby completing the quality evaluation of the image to be evaluated.
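The evaluation stage of steps S31–S32 can be sketched as below. The feature extractor is a hypothetical placeholder (a few global contrast statistics), not the patent's actual step-S1 descriptor, and the pre-trained model is simulated with random data purely to make the sketch self-contained.

```python
import numpy as np
from sklearn.svm import NuSVR

def extract_contrast_features(image):
    """Placeholder for the step-S1 contrast distortion descriptor:
    a few global statistics; the patent's real feature set differs."""
    return np.array([image.mean(), image.std(),
                     image.max() - image.min(), np.median(image)])

# Hypothetical pre-trained nu-SVR quality model (outcome of step S2).
rng = np.random.default_rng(1)
train_X = rng.random((60, 4))
train_y = rng.random(60)
model = NuSVR(kernel="rbf").fit(train_X, train_y)

# S31: build the feature set of the image to be evaluated.
image = rng.random((64, 64))  # stand-in for the image under test
features = extract_contrast_features(image).reshape(1, -1)

# S32: feed the feature set to the trained model to get the quality score.
score = float(model.predict(features)[0])
```

In practice the same descriptor used for training must be applied at evaluation time, so that the feature set fed to the model in S32 matches the feature space the SVR was fitted on.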
CN201910872439.6A 2019-09-16 2019-09-16 No-reference contrast distortion image quality evaluation method Active CN110570420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910872439.6A CN110570420B (en) 2019-09-16 2019-09-16 No-reference contrast distortion image quality evaluation method


Publications (2)

Publication Number Publication Date
CN110570420A true CN110570420A (en) 2019-12-13
CN110570420B CN110570420B (en) 2023-04-07

Family

ID=68780171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910872439.6A Active CN110570420B (en) 2019-09-16 2019-09-16 No-reference contrast distortion image quality evaluation method

Country Status (1)

Country Link
CN (1) CN110570420B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111192258A (en) * 2020-01-02 2020-05-22 广州大学 Image quality evaluation method and device
CN111652854A (en) * 2020-05-13 2020-09-11 中山大学 No-reference image quality evaluation method based on image high-frequency information
CN112257711A (en) * 2020-10-26 2021-01-22 哈尔滨市科佳通用机电股份有限公司 Method for detecting damage fault of railway wagon floor
CN112446879A (en) * 2021-01-06 2021-03-05 天津科技大学 Contrast distortion image quality evaluation method based on image entropy
CN112446878A (en) * 2021-01-04 2021-03-05 天津科技大学 Color image quality evaluation method based on joint entropy
CN113436167A (en) * 2021-06-25 2021-09-24 湖南工商大学 No-reference color image quality evaluation method based on deep learning and visual perception

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243435A1 (en) * 2010-03-30 2011-10-06 Chung-Ang University Industry-Academy Cooperation Foundation Apparatus and method for color distortion correction of image by estimate of correction matrix
CN106600597A (en) * 2016-12-22 2017-04-26 华中科技大学 Non-reference color image quality evaluation method based on local binary pattern
CN107610093A (en) * 2017-08-02 2018-01-19 西安理工大学 Full-reference image quality evaluating method based on similarity feature fusion
CN108322733A (en) * 2018-01-17 2018-07-24 宁波大学 No-reference objective quality evaluation method for high dynamic range images
CN108428227A (en) * 2018-02-27 2018-08-21 浙江科技学院 Non-reference picture quality appraisement method based on full convolutional neural networks
CN109218716A (en) * 2018-10-22 2019-01-15 天津大学 No-reference tone-mapped image quality evaluation method based on color statistics and information entropy
CN109871852A (en) * 2019-01-05 2019-06-11 天津大学 No-reference tone-mapped image quality evaluation method


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
AJINKYA M. PUND ET AL.: "A Spatial Domain Feature Based Approach For No Reference Image Quality Assessment of JPEG Compressed Images", 《IEEE》 *
ANISH MITTAL ET AL.: "No-Reference Image Quality Assessment in the Spatial Domain", 《IEEE》 *
LIU Chun et al.: "Contrast-distorted image quality assessment based on convolutional neural networks", Microelectronics & Computer *
LIN Hongfei et al.: "Research on an RGB-to-CIELab color space conversion algorithm based on the local polynomial method", Prepress Technology *
WANG Zhiming: "A survey of no-reference image quality assessment", Acta Automatica Sinica *


Also Published As

Publication number Publication date
CN110570420B (en) 2023-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant