CN110570420B - No-reference contrast distortion image quality evaluation method - Google Patents

No-reference contrast distortion image quality evaluation method

Info

Publication number
CN110570420B
CN110570420B
Authority
CN
China
Prior art keywords
image
color
model
svr
distortion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910872439.6A
Other languages
Chinese (zh)
Other versions
CN110570420A (en)
Inventor
卢伟 (Lu Wei)
吕文静 (Lyu Wenjing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat-sen University
Original Assignee
Sun Yat-sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat-sen University
Priority to CN201910872439.6A
Publication of CN110570420A
Application granted
Publication of CN110570420B


Classifications

    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06F18/253 Fusion techniques of extracted features
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/90 Determination of colour characteristics
    • G06T2207/10024 Color image
    • G06T2207/30168 Image quality inspection
    • Y02T10/40 Engine management systems

Abstract

The invention provides a no-reference quality evaluation method for contrast-distorted images, comprising the following steps: extracting color-moment and information-entropy features in multiple color spaces from the contrast-distorted image, and constructing a feature set describing the image distortion; constructing a training set from the distortion feature set and prior scores, and building a prediction model for image quality evaluation; and extracting the contrast-distortion feature set of the image to be evaluated, computing with the prediction model, and predicting the quality of the image to be evaluated. The method fuses multiple color spaces and combines color-moment and information-entropy features, which ensures accurate and effective detection and fills the gap in the field of no-reference quality evaluation of contrast-distorted images.

Description

No-reference contrast distortion image quality evaluation method
Technical Field
The invention relates to the technical field of digital image forensics, and in particular to a no-reference quality evaluation method for contrast-distorted images.
Background
With the rapid development of electronic technology and the popularization of digital imaging devices, digital images are widely used in daily office work, study, and life. The digital image has become an important carrier of information and plays an irreplaceable role in fields such as the military, networking, archaeology, and judicial forensics. At the same time, with the rapid development of editing software of all kinds, ordinary users can easily edit, modify, and beautify images with these tools. If such edited and tampered digital images are taken as important information, they may mislead people and have a harmful influence on daily life and even society as a whole. Research on digital image forensics technology has therefore become an important and active field.
Visual quality evaluation measures the distortion intensity of an image by modeling the distortion in the image, in a way that is consistent with the subjective perception of the human eye. Applied to the field of digital image forensics, it plays an important role in image analysis and offers a new direction for image and video forensics research. Image quality evaluation methods can be divided into three categories according to the availability of a reference image: full-reference, reduced-reference, and no-reference. Since a reference image is generally unavailable in practical applications, no-reference image quality evaluation is a research hotspot.
Existing no-reference image quality evaluation methods fall into two main categories according to distortion type: methods for a specific distortion type and general-purpose methods. However, no existing evaluation method targets the contrast-distortion type, and because contrast distortion differs in character from other distortion types, general-purpose methods perform poorly when evaluating contrast-distorted images.
Disclosure of Invention
The invention provides a no-reference quality evaluation method for contrast-distorted images, aiming to overcome the technical defects that the existing no-reference image quality evaluation field has no method for the contrast-distortion type and that general-purpose evaluation methods perform poorly on it.
In order to solve the technical problems, the technical scheme of the invention is as follows:
A no-reference contrast-distorted image quality evaluation method comprises the following steps:
S1: extracting color-moment and information-entropy features in multiple color spaces from the contrast-distorted image, and constructing a feature set describing the image distortion;
S2: constructing a training set from the image-distortion feature set and prior scores, and building a prediction model for image quality evaluation;
S3: extracting the contrast-distortion feature set of the image to be evaluated, computing with the image quality evaluation prediction model, and predicting the quality of the image to be evaluated.
Wherein step S1 comprises the following steps:
S11: converting the image from the RGB color space to the XYZ color space, and then from the XYZ color space to the CIELab color space; the three channels of the RGB color space are denoted R, G, B, and the three channels of the CIELab color space are denoted L, a, b;
S12: extracting first- to third-order central color-moment features from the 6 color channels obtained in step S11, denoted $M_i^j$, where $i \in \{1,2,3\}$ is the order of the color moment and $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces;
S13: extracting information-entropy features from the 6 color channels obtained in step S11, denoted $H_j$, where $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces.
In step S11, the image is converted from the RGB color space to the XYZ color space; the conversion (with the standard sRGB/D65 matrix) is:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

The image is then converted from the XYZ color space to the CIELab color space:

$$L = 116\,f(Y/Y_n) - 16,\qquad a = 500\,[f(X/X_n) - f(Y/Y_n)],\qquad b = 200\,[f(Y/Y_n) - f(Z/Z_n)]$$

where $(X_n, Y_n, Z_n)$ is the reference white point, $f(t) = t^{1/3}$ for $t > (6/29)^3$, and $f(t) = \frac{1}{3}(29/6)^2 t + \frac{4}{29}$ otherwise. Thus, 6 color channel components are obtained from each picture, namely: R, G, B and L, a, b;
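For illustration, the two conversions can be chained per pixel. The following is a minimal numpy sketch assuming the standard sRGB/D65 matrix and white point given above; the function name and the [0, 1] image layout are illustrative, not taken from the patent:

```python
import numpy as np

# Standard sRGB (D65) -> XYZ matrix, assumed here since the patent's own
# coefficients are not legible in this copy.
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.9505, 1.0000, 1.0890])  # reference white (Xn, Yn, Zn)

def six_channels(img):
    """Return the 6 channel maps (R, G, B, L, a, b) of an RGB image in [0, 1]."""
    h, w, _ = img.shape
    xyz = img.reshape(-1, 3) @ RGB2XYZ.T       # per-pixel RGB -> XYZ
    t = xyz / WHITE_D65                        # normalise by the white point
    d = 6.0 / 29.0                             # CIELab threshold constant
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    L = (116.0 * f[:, 1] - 16.0).reshape(h, w)
    a = (500.0 * (f[:, 0] - f[:, 1])).reshape(h, w)
    b = (200.0 * (f[:, 1] - f[:, 2])).reshape(h, w)
    return [img[..., 0], img[..., 1], img[..., 2], L, a, b]
```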
In step S12, an image is denoted I; the color moments are multi-order, here up to order 3, and are computed as central moments:

$$\mathrm{firstMoment}(I) = E(I)$$

$$\mathrm{secondMoment}(I) = \left(E\left[(I - E(I))^2\right]\right)^{1/2}$$

$$\mathrm{thirdMoment}(I) = \left(E\left[(I - E(I))^3\right]\right)^{1/3}$$
where E is the averaging operator. Color-moment features are extracted from the 6 color channels obtained in step S11 and denoted $M_i^j$, where $i \in \{1,2,3\}$ is the order of the color moment and $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces;
In step S13, the information entropy is used to describe the information complexity of the picture:

$$H = -\sum_i P_i(I)\,\log_2 P_i(I)$$

where $P_i(I)$ is the probability that a pixel of intensity $i$ occurs in the image. Information-entropy features are extracted from each of the 6 color channels obtained in step S11 and denoted $H_j$, $j \in \{R,G,B,L,a,b\}$. The color-moment and information-entropy features are then combined into a feature vector $f$ describing image contrast distortion:

$$f = \left[\, M_i^j,\ H_j \,\right]_{i\in\{1,2,3\},\ j\in\{R,G,B,L,a,b\}} \in \mathbb{R}^{24}$$
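To make the 24-dimensional descriptor concrete, here is a minimal sketch of the moment and entropy computations, continuing the conversion sketch above; the base-2 logarithm and the 256-bin histogram are assumptions, as the patent does not fix them:

```python
import numpy as np

def color_moments(ch):
    """First- to third-order central color moments of one channel."""
    mu = ch.mean()
    m2 = np.sqrt(np.mean((ch - mu) ** 2))
    m3 = np.cbrt(np.mean((ch - mu) ** 3))   # cbrt keeps the sign of the skew
    return [mu, m2, m3]

def channel_entropy(ch, bins=256):
    """Shannon entropy of the channel's intensity histogram (base 2 assumed)."""
    hist, _ = np.histogram(ch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                             # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))

def contrast_features(channels):
    """24-D descriptor f: 3 moments + 1 entropy for each of the 6 channels."""
    moments = [m for ch in channels for m in color_moments(ch)]   # 18 values
    entropies = [channel_entropy(ch) for ch in channels]          #  6 values
    return np.array(moments + entropies)                          # shape (24,)
```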
in step S2, a prediction model for image quality evaluation is constructed using an SVR model.
Wherein, step S2 specifically comprises the following steps:
S21: combining the contrast-distortion feature set extracted in step S1 with the prior scores to form a data set;
S22: initializing an SVR model and setting initial values of all parameters in the SVR model;
S23: randomly dividing the data set obtained in step S21 into two parts, used respectively as the training set and the test set in the SVR parameter optimization process;
S24: optimizing the parameters with a grid method and setting the optimized parameters as the initial parameters, completing the initialization of the model;
S25: training the initialized SVR model with the training set and the test set to obtain the prediction model for image quality evaluation.
In step S21, the prior score is the MOS/DMOS value of each picture in the public data set, used as a prior on picture quality to optimize the model during training.
Wherein the data set obtained in step S21 is expressed as $\{(f_1, Q_1), \ldots, (f_k, Q_k)\}$, where $k$ is the number of distorted images in the data set;
the version of the SVR model is set in step S22; the ν-SVR variant based on the RBF kernel function in the LIBSVM package is used;
in step S23, a Hold-Out partitioning method is used to randomly divide the data set obtained in step S21 into two parts, α% and (100 - α)%, used respectively as the training set and the test set in the SVR parameter optimization; the training set is used to train the model and the test set to evaluate its generalization ability;
in step S24, a grid method is used to complete the parameter optimization of the SVR model; the parameters to be optimized are (C, γ, ε). The grid is specified by (cmin, cmax, gmin, gmax, v, cstep, gstep, msestep), where cmin and cmax are the minimum and maximum values of the parameter C; gmin and gmax are the minimum and maximum values of the parameter γ; v is the SVM cross-validation parameter; and cstep, gstep, and msestep are the search step sizes of the three parameters (C, γ, ε). The output of the grid search is the initial values of the SVR model parameters (C, γ, ε).
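The patent performs this search with LIBSVM's grid tool over ν-SVR; the sketch below is a rough scikit-learn analogue using ε-SVR, where the exponent ranges and epsilon candidates are illustrative stand-ins for (cmin, cmax, gmin, gmax, cstep, gstep, msestep):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

def tune_svr(features, scores, alpha=0.8, v=5):
    """Hold-Out split plus grid search over (C, gamma, epsilon)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, scores, train_size=alpha, random_state=0)
    grid = {
        "C": 2.0 ** np.arange(-8, 9, 2),       # cmin..cmax with step cstep
        "gamma": 2.0 ** np.arange(-8, 9, 2),   # gmin..gmax with step gstep
        "epsilon": [0.01, 0.05, 0.1, 0.5],     # range governed by msestep
    }
    search = GridSearchCV(SVR(kernel="rbf"), grid, cv=v,
                          scoring="neg_mean_squared_error")
    search.fit(X_tr, y_tr)
    print("held-out neg-MSE:", search.score(X_te, y_te))  # generalization check
    return search.best_estimator_, search.best_params_
```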
Wherein, step S3 specifically comprises the following steps:
S31: extracting the contrast-distortion descriptor of the image to be evaluated by the method of step S1, and constructing the feature set for the SVR prediction model;
S32: inputting the feature set into the trained prediction model, computing the prediction result, and completing the quality evaluation of the image to be evaluated.
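Putting the pieces together, scoring one image is a descriptor extraction followed by a single SVR prediction; a sketch continuing the functions defined above:

```python
def evaluate_image(model, img):
    """Predict the quality score of one RGB image in [0, 1]."""
    f = contrast_features(six_channels(img)).reshape(1, -1)  # (1, 24) row
    return float(model.predict(f)[0])                        # predicted score
```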
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
The no-reference quality evaluation method for contrast-distorted images first converts the input image from RGB into the CIELab color space and takes color moments and information entropy as the representation of the contrast-distortion measure; the color moments describe the distribution of color information well and can therefore characterize the contrast distortion of the image. Finally, support vector regression is used to train a quality model that maps all the features to an objective quality score. The method thus fuses multiple color spaces and combines color-moment and information-entropy features, which ensures accurate and effective detection and fills the gap in the field of no-reference quality evaluation of contrast-distorted images.
Drawings
FIG. 1 is a schematic flow diagram of the method of the invention;
FIG. 2 is the image to be evaluated in Example 2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the present embodiments, certain elements of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in FIG. 1, a no-reference contrast-distorted image quality evaluation method comprises the following steps:
S1: extracting color-moment and information-entropy features in multiple color spaces from the contrast-distorted image, and constructing a feature set describing the image distortion;
S2: constructing a training set from the image-distortion feature set and prior scores, and building a prediction model for image quality evaluation;
S3: extracting the contrast-distortion feature set of the image to be evaluated, computing with the image quality evaluation prediction model, and predicting the quality of the image to be evaluated.
In a specific implementation process, the no-reference quality evaluation method for contrast-distorted images provided by the invention first converts the input image from RGB into the CIELab color space and takes color moments and information entropy as the representation of the contrast-distortion measure; the color moments describe the distribution of color information well and can therefore characterize the contrast distortion of the image. Finally, support vector regression is used to train a quality model that maps all the features to an objective quality score. The method thus fuses multiple color spaces and combines color-moment and information-entropy features, which ensures accurate and effective detection and fills the gap in the field of no-reference quality evaluation of contrast-distorted images.
Example 2
More specifically, on the basis of Example 1, the image shown in FIG. 2 is evaluated, and step S1 comprises the following steps:
S11: converting the image from the RGB color space to the XYZ color space, and then from the XYZ color space to the CIELab color space; the three channels of the RGB color space are denoted R, G, B, and the three channels of the CIELab color space are denoted L, a, b;
S12: extracting first- to third-order central color-moment features from the 6 color channels obtained in step S11, denoted $M_i^j$, where $i \in \{1,2,3\}$ is the order of the color moment and $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces;
S13: extracting information-entropy features from the 6 color channels obtained in step S11, denoted $H_j$, where $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces.
More specifically, in step S11, the image is converted from the RGB color space to the XYZ color space; the conversion (with the standard sRGB/D65 matrix) is:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

The image is then converted from the XYZ color space to the CIELab color space:

$$L = 116\,f(Y/Y_n) - 16,\qquad a = 500\,[f(X/X_n) - f(Y/Y_n)],\qquad b = 200\,[f(Y/Y_n) - f(Z/Z_n)]$$

where $(X_n, Y_n, Z_n)$ is the reference white point, $f(t) = t^{1/3}$ for $t > (6/29)^3$, and $f(t) = \frac{1}{3}(29/6)^2 t + \frac{4}{29}$ otherwise. Thus, 6 color channel components are obtained from each picture, namely: R, G, B and L, a, b;
In step S12, an image is denoted I; the color moments are multi-order, here up to order 3, and are computed as central moments:

$$\mathrm{firstMoment}(I) = E(I)$$

$$\mathrm{secondMoment}(I) = \left(E\left[(I - E(I))^2\right]\right)^{1/2}$$

$$\mathrm{thirdMoment}(I) = \left(E\left[(I - E(I))^3\right]\right)^{1/3}$$
where E is the averaging operator. Color-moment features are extracted from the 6 color channels obtained in step S11 and denoted $M_i^j$, where $i \in \{1,2,3\}$ is the order of the color moment and $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces;
In step S13, the information entropy is used to describe the information complexity of the picture:

$$H = -\sum_i P_i(I)\,\log_2 P_i(I)$$

where $P_i(I)$ is the probability that a pixel of intensity $i$ occurs in the image. Information-entropy features are extracted from each of the 6 color channels obtained in step S11 and denoted $H_j$, $j \in \{R,G,B,L,a,b\}$. The color-moment and information-entropy features are then combined into a feature vector $f$ describing image contrast distortion:

$$f = \left[\, M_i^j,\ H_j \,\right]_{i\in\{1,2,3\},\ j\in\{R,G,B,L,a,b\}} \in \mathbb{R}^{24}$$
more specifically, in step S2, a prediction model for image quality evaluation is constructed using an SVR model.
More specifically, step S2 specifically comprises the following steps:
S21: combining the contrast-distortion feature set extracted in step S1 with the prior scores to form a data set;
S22: initializing an SVR model and setting initial values of all parameters in the SVR model;
S23: randomly dividing the data set obtained in step S21 into two parts, used respectively as the training set and the test set in the SVR parameter optimization process;
S24: optimizing the parameters with a grid method and setting the optimized parameters as the initial parameters, completing the initialization of the model;
S25: training the initialized SVR model with the training set and the test set to obtain the prediction model for image quality evaluation.
More specifically, in step S21, the prior score is the MOS/DMOS value of each picture in the public data set, used as a prior on picture quality to optimize the model during training. For example, the CSIQ data set was established by the School of Electrical and Computer Engineering of Oklahoma State University, USA; it contains 30 reference images and 866 distorted images covering 6 distortion types: JPEG compression, JPEG2000 compression, global contrast reduction, additive Gaussian pink noise, additive white Gaussian noise, and Gaussian blur. The DMOS values of this database are statistically derived from about 5000 ratings given by 25 observers, and they range over [0, 1].
More specifically, the data set obtained in step S21 is expressed as $\{(f_1, Q_1), \ldots, (f_k, Q_k)\}$, where $k$ is the number of distorted images in the data set. The contrast-distorted images used to train the model in this example come from the public image quality database CID2013, with $k = 400$, so the feature vector set is a 24 × 400 matrix.
A version of the SVR model is set in step S22 (see "A tutorial on support vector regression", Statistics and Computing); the ν-SVR variant based on the RBF kernel in the LIBSVM package is used;
in step S23, a Hold-Out partitioning method is used to randomly divide the data set obtained in step S21 into two parts, α% and (100 - α)%, used respectively as the training set and the test set in the SVR parameter optimization; the training set is used to train the model and the test set to evaluate its generalization ability. In this example, α = 80.
In step S24, a grid method is used to complete the parameter optimization of the SVR model; the parameters to be optimized are (C, γ, ε). The grid is specified by (cmin, cmax, gmin, gmax, v, cstep, gstep, msestep), where cmin and cmax are the minimum and maximum values of the parameter C; gmin and gmax are the minimum and maximum values of the parameter γ; v is the SVM cross-validation parameter; and cstep, gstep, and msestep are the search step sizes of the three parameters (C, γ, ε). The output of the grid search is the initial values of the SVR model parameters (C, γ, ε).
In a specific implementation, (cmin, cmax, gmin, gmax, v, cstep, gstep) is set to (-8, 3, 0.1, 4). The output of the grid search is the initial values of the SVR model parameters (C, γ, ε); the parameters (C, γ, ε) are set to (4.9246, 0.2500, 0.0385).
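With these tuned values, the final model of this example could be instantiated as below; mapping the LIBSVM ν-SVR result onto scikit-learn's ε-SVR is an assumption of this sketch, and X_tr, y_tr denote the Hold-Out training split (320 × 24 when α = 80 and k = 400):

```python
from sklearn.svm import SVR

# Final model with the tuned (C, gamma, epsilon) reported in this example.
model = SVR(kernel="rbf", C=4.9246, gamma=0.2500, epsilon=0.0385)
# model.fit(X_tr, y_tr)  # X_tr, y_tr: the 80% Hold-Out training split
```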
More specifically, step S3 specifically comprises the following steps:
S31: extracting the contrast-distortion descriptor of the image to be evaluated by the method of step S1, and constructing the feature set for the SVR prediction model;
In the specific implementation process, the number of images to be evaluated is 1, so the computed feature vector is a 24 × 1 matrix.
S32: inputting the feature set into the trained prediction model and computing the prediction result, completing the quality evaluation of the image to be evaluated; the specific expression is:
score=Model(f)
The predicted score is the image quality of the image to be evaluated. The prediction in this example is a real number in the interval [1, 5], with a value of 2.8902.
It should be understood that the above embodiments are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively. Any modification, equivalent replacement, or improvement made within the spirit and principle of the invention shall fall within the protection scope of the claims of the invention.

Claims (6)

1. A method for evaluating the quality of an image without reference contrast distortion is characterized by comprising the following steps:
S1: extracting color-moment and information-entropy features in multiple color spaces from the contrast-distorted image, and constructing a feature set describing the image distortion; the method comprises the following steps:
S11: converting the image from the RGB color space to the XYZ color space, and then from the XYZ color space to the CIELab color space; the three channels of the RGB color space are denoted R, G, B, and the three channels of the CIELab color space are denoted L, a, b;
the image is converted from the RGB color space to the XYZ color space; the conversion (with the standard sRGB/D65 matrix) is:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

the image is then converted from the XYZ color space to the CIELab color space:

$$L = 116\,f(Y/Y_n) - 16,\qquad a = 500\,[f(X/X_n) - f(Y/Y_n)],\qquad b = 200\,[f(Y/Y_n) - f(Z/Z_n)]$$

where $(X_n, Y_n, Z_n)$ is the reference white point, $f(t) = t^{1/3}$ for $t > (6/29)^3$, and $f(t) = \frac{1}{3}(29/6)^2 t + \frac{4}{29}$ otherwise; thus, 6 color channel components are obtained from each picture, namely: R, G, B and L, a, b;
S12: extracting first- to third-order central color-moment features from the 6 color channels obtained in step S11, denoted $M_i^j$, where $i \in \{1,2,3\}$ is the order of the color moment and $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces; specifically:
an image is denoted I; the color moments are multi-order, here up to order 3, and are computed as central moments:

$$\mathrm{firstMoment}(I) = E(I)$$

$$\mathrm{secondMoment}(I) = \left(E\left[(I - E(I))^2\right]\right)^{1/2}$$

$$\mathrm{thirdMoment}(I) = \left(E\left[(I - E(I))^3\right]\right)^{1/3}$$

where E is the averaging operator; color-moment features are extracted from the 6 color channels obtained in step S11 and denoted $M_i^j$, where $i \in \{1,2,3\}$ is the order of the color moment and $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces;
S13: extracting information-entropy features from the 6 color channels obtained in step S11, denoted $H_j$, where $j \in \{R,G,B,L,a,b\}$ indexes the color channels of the different color spaces;
the information entropy is used to describe the information complexity of the picture:

$$H = -\sum_i P_i(I)\,\log_2 P_i(I)$$

where $P_i(I)$ is the probability that a pixel of intensity $i$ occurs in the image; information-entropy features are extracted from each of the 6 color channels obtained in step S11 and denoted $H_j$, $j \in \{R,G,B,L,a,b\}$; the color-moment and information-entropy features are then combined into a feature vector $f$ describing image contrast distortion:

$$f = \left[\, M_i^j,\ H_j \,\right]_{i\in\{1,2,3\},\ j\in\{R,G,B,L,a,b\}} \in \mathbb{R}^{24}$$
S2: constructing a training set from the image-distortion feature set and prior scores, and building a prediction model for image quality evaluation;
S3: extracting the contrast-distortion feature set of the image to be evaluated, computing with the image quality evaluation prediction model, and predicting the quality of the image to be evaluated.
2. The method according to claim 1, wherein in step S2, a prediction model for image quality evaluation is constructed by using an SVR model.
3. The method for evaluating the quality of an image without reference contrast distortion according to claim 2, wherein step S2 specifically comprises the following steps:
S21: combining the contrast-distortion feature set extracted in step S1 with the prior scores to form a data set;
S22: initializing an SVR model and setting initial values of all parameters in the SVR model;
S23: randomly dividing the data set obtained in step S21 into two parts, used respectively as the training set and the test set in the SVR parameter optimization process;
S24: optimizing the parameters with a grid method and setting the optimized parameters as the initial parameters, completing the initialization of the model;
S25: training the initialized SVR model with the training set and the test set to obtain the prediction model for image quality evaluation.
4. The method as claimed in claim 3, wherein in step S21, the prior score is the MOS/DMOS value of each picture in the public data set, used as a prior on picture quality to optimize the model during training.
5. The method of claim 3, wherein the data set obtained in step S21 is expressed as $\{(f_1, Q_1), \ldots, (f_k, Q_k)\}$, where $k$ is the number of distorted images in the data set;
setting the version of the SVR model in step S22, and using the ν-SVR variant based on the RBF kernel function in the LIBSVM package;
in step S23, a Hold-Out partitioning method is used to randomly divide the data set obtained in step S21 into two parts, α% and (100 - α)%, used respectively as the training set and the test set in the SVR parameter optimization; the training set is used to train the model and the test set to evaluate its generalization ability;
in step S24, a grid method is used to complete the parameter optimization of the SVR model; the parameters to be optimized are (C, γ, ε); the grid is specified by (cmin, cmax, gmin, gmax, v, cstep, gstep, msestep), where cmin and cmax are the minimum and maximum values of the parameter C; gmin and gmax are the minimum and maximum values of the parameter γ; v is the SVM cross-validation parameter; and cstep, gstep, and msestep are the search step sizes of the three parameters (C, γ, ε); the output of the grid search is the initial values of the SVR model parameters (C, γ, ε).
6. The method according to claim 3, wherein step S3 specifically comprises the following steps:
S31: extracting the contrast-distortion descriptor of the image to be evaluated by the method of step S1, and constructing the feature set for the SVR prediction model;
S32: inputting the feature set into the trained prediction model, computing the prediction result, and completing the quality evaluation of the image to be evaluated.
CN201910872439.6A 2019-09-16 2019-09-16 No-reference contrast distortion image quality evaluation method Active CN110570420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910872439.6A CN110570420B (en) 2019-09-16 2019-09-16 No-reference contrast distortion image quality evaluation method


Publications (2)

Publication Number Publication Date
CN110570420A CN110570420A (en) 2019-12-13
CN110570420B (en) 2023-04-07

Family

ID=68780171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910872439.6A Active CN110570420B (en) 2019-09-16 2019-09-16 No-reference contrast distortion image quality evaluation method

Country Status (1)

Country Link
CN (1) CN110570420B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111192258A (en) * 2020-01-02 2020-05-22 广州大学 Image quality evaluation method and device
CN111652854B (en) * 2020-05-13 2022-08-26 中山大学 No-reference image quality evaluation method based on image high-frequency information
CN112257711B (en) * 2020-10-26 2021-04-09 哈尔滨市科佳通用机电股份有限公司 Method for detecting damage fault of railway wagon floor
CN112446878B (en) * 2021-01-04 2023-03-14 天津科技大学 Color image quality evaluation method based on joint entropy
CN112446879B (en) * 2021-01-06 2022-09-23 天津科技大学 Contrast distortion image quality evaluation method based on image entropy
CN113436167B (en) * 2021-06-25 2022-04-26 湖南工商大学 No-reference color image quality evaluation method based on deep learning and visual perception

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108322733A (en) * 2018-01-17 2018-07-24 宁波大学 It is a kind of without refer to high dynamic range images method for evaluating objective quality
CN108428227A (en) * 2018-02-27 2018-08-21 浙江科技学院 Non-reference picture quality appraisement method based on full convolutional neural networks

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101080846B1 (en) * 2010-03-30 2011-11-07 중앙대학교 산학협력단 Apparatus and method for color distortion correction of image by estimate of correction matrix
CN106600597B (en) * 2016-12-22 2019-04-12 华中科技大学 It is a kind of based on local binary patterns without reference color image quality evaluation method
CN107610093B (en) * 2017-08-02 2020-09-25 西安理工大学 Full-reference image quality evaluation method based on similarity feature fusion
CN109218716B (en) * 2018-10-22 2020-11-06 天津大学 No-reference tone mapping image quality evaluation method based on color statistics and information entropy
CN109871852B (en) * 2019-01-05 2023-05-26 天津大学 No-reference tone mapping image quality evaluation method


Also Published As

Publication number Publication date
CN110570420A (en) 2019-12-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant