CN110717892A - Tone mapping image quality evaluation method - Google Patents

Tone mapping image quality evaluation method

Info

Publication number
CN110717892A
Authority
CN
China
Prior art keywords
pixel
pixel points
components
value
tone mapping
Prior art date
Legal status
Granted
Application number
CN201910881340.2A
Other languages
Chinese (zh)
Other versions
CN110717892B (en)
Inventor
邵枫 (Shao Feng)
王雪津 (Wang Xuejin)
Current Assignee
Ningbo Frontier Digital Technology Co.,Ltd.
Shanghai Ruishenglian Information Technology Co., Ltd.
Original Assignee
Ningbo University
Priority date
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201910881340.2A priority Critical patent/CN110717892B/en
Publication of CN110717892A publication Critical patent/CN110717892A/en
Application granted granted Critical
Publication of CN110717892B publication Critical patent/CN110717892B/en
Status: Active


Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 20/10 — Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06T 7/11 — Image analysis; segmentation; region-based segmentation
    • G06T 7/90 — Image analysis; determination of colour characteristics
    • G06T 2207/20208 — Image enhancement details; high dynamic range [HDR] image processing
    • G06T 2207/30168 — Subject of image; image quality inspection


Abstract

The invention discloses a tone mapping image quality evaluation method. In the training stage, the influence of bright-region and dark-region characteristics on tone mapping is taken into consideration: bright-dark region feature vectors of the tone-mapped images are extracted, region contrast feature vectors are extracted at the same time, and the two are combined into a global feature vector; support vector regression is then used to train on the global feature vectors of all tone-mapped images in a training image set, constructing a quality prediction model. In the testing stage, the global feature vector of the tone-mapped image under test is calculated and fed to the quality prediction model constructed in the training stage, yielding an objective quality prediction value. Because the global feature vector obtained in this way is stable and reflects quality changes of tone-mapped images well, the correlation between objective evaluation results and subjective perception is effectively improved.

Description

Tone mapping image quality evaluation method
Technical Field
The invention relates to an image quality evaluation method, in particular to a tone mapping image quality evaluation method.
Background
With the rapid development of display technologies, high dynamic range (HDR) images have received increasing attention. A high dynamic range image has rich tonal levels and can achieve light-and-shadow effects far closer to reality than an ordinary image. However, conventional display devices can only support low dynamic range output. To resolve the mismatch between the dynamic range of real scenes and that of conventional display devices, many tone mapping algorithms for high dynamic range images have been proposed. The aim of a tone mapping algorithm is to compress the brightness of a high dynamic range image into a range acceptable to conventional display devices while retaining as much of the original image's detail as possible and avoiding image flaws. Accurately and objectively evaluating the performance of different tone mapping methods therefore plays an important role in guiding content production and post-processing.
However, if an existing image quality evaluation method is applied directly to tone-mapped images, the objective evaluation value cannot be predicted accurately, because a tone-mapped image has only the high dynamic range image as a reference rather than a pristine image of the same dynamic range. How to effectively extract visual features during evaluation, so that objective results agree more closely with perception by the human visual system, is therefore the problem to be studied and solved when performing objective quality evaluation of tone-mapped images.
Disclosure of Invention
The invention aims to provide a tone mapping image quality evaluation method that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the invention to solve the above technical problem is as follows: a tone mapping image quality evaluation method, characterized by comprising a training stage and a testing stage;
the specific steps of the training phase process are as follows:
① _1. Select N tone-mapped images, each of width W and height H, to form a training image set, and record the k-th tone-mapped image in the training image set as S_k; here N is a positive integer with N > 1, and k is a positive integer with initial value 1 and 1 ≤ k ≤ N;
① _2. Divide each tone-mapped image in the training image set into a bright region, a dark region and a normal region; for S_k, record the bright, dark and normal regions as S_k,b, S_k,d and S_k,n, respectively;
① _3. From the bright region and the dark region of each tone-mapped image in the training image set, calculate its bright-dark region feature vector; for S_k, record it as F_k^bd. And from the bright, dark and normal regions of each tone-mapped image in the training image set, calculate its region contrast feature vector; for S_k, record it as F_k^rc. Here F_k^bd has dimension 3 × 1 and F_k^rc has dimension 8 × 1;
① _4. Connect the bright-dark region feature vector and the region contrast feature vector of each tone-mapped image in the training image set into a global feature vector; for S_k, record it as F_k, F_k = [F_k^bd, F_k^rc], where F_k has dimension 11 × 1, the symbol "[ ]" denotes a vector, and [F_k^bd, F_k^rc] means that F_k^bd and F_k^rc are connected to form a vector;
① _5. Form a training sample data set from the global feature vectors and the mean subjective score differences (DMOS) of all tone-mapped images in the training image set; the set contains N global feature vectors and N mean subjective score differences. Then, using support vector regression as the machine-learning method, train on all global feature vectors in the training sample data set so that the error between the regression function value obtained by training and the mean subjective score difference is minimized, giving the fitted optimal weight vector w* and optimal bias term b*. Construct the quality prediction model from w* and b* as f(F) = (w*)^T F + b*, where f(·) is the function representation, F denotes the global feature vector of a tone-mapped image and serves as the input vector of the quality prediction model, (w*)^T is the transpose of w*, and (w*)^T F + b* is a linear function of F;
The testing stage process comprises the following specific steps:
② For any tone-mapped image I_test used for testing, with width W′ and height H′: following the same operations as steps ① _2 to ① _4, obtain the global feature vector of I_test, recorded as F_test (of dimension 11 × 1). Then, using the quality prediction model constructed in the training stage, predict on F_test; take the resulting predicted value as the objective quality prediction value of I_test, recorded as Q_test, Q_test = f(F_test) = (w*)^T F_test + b*, where (w*)^T F_test + b* is a linear function of F_test.
The acquisition process of S_k,b, S_k,d and S_k,n in said step ① _2 is as follows:
① _2a. Record the R, G and B components of S_k in the RGB color space as R_k, G_k and B_k, respectively. Then compute the dark channel image of S_k, recorded as J_k^dark, where the pixel value of the pixel at coordinate position (x, y) in J_k^dark is J_k^dark(x, y) = min over (x_1, y_1) ∈ C_{x,y} of min( R_k(x_1, y_1), G_k(x_1, y_1), B_k(x_1, y_1) ). Here 1 ≤ x ≤ W, 1 ≤ y ≤ H, min() is the minimum-value function, C_{x,y} denotes the set of coordinate positions of all pixels within the 3 × 3 neighborhood centered on the pixel at coordinate position (x, y), (x_1, y_1) is any coordinate position in C_{x,y}, and R_k(x_1, y_1), G_k(x_1, y_1) and B_k(x_1, y_1) denote the pixel values at coordinate position (x_1, y_1) in R_k, G_k and B_k, respectively;
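A minimal sketch of the dark-channel computation in step ① _2a, assuming 8-bit channel values given as H × W nested lists (the function and variable names are illustrative, not from the patent); border pixels use the part of the 3 × 3 window that lies inside the image, which is one possible boundary convention:

```python
def dark_channel(r, g, b):
    """Per-pixel minimum over R, G, B, then a 3x3 neighborhood minimum.

    r, g, b: H x W nested lists of 8-bit values.
    """
    H, W = len(r), len(r[0])
    # Per-pixel minimum over the three color channels.
    m = [[min(r[y][x], g[y][x], b[y][x]) for x in range(W)] for y in range(H)]
    dark = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Minimum over the 3x3 window C_{x,y}, clipped at the borders.
            neigh = [m[y1][x1]
                     for y1 in range(max(0, y - 1), min(H, y + 2))
                     for x1 in range(max(0, x - 1), min(W, x + 2))]
            dark[y][x] = min(neigh)
    return dark
```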
① _2b. Compute the gray histogram distribution of J_k^dark, recorded as {h_k(j) | 1 ≤ j ≤ 256}. Record as X_min the coordinate of the node with the smallest coordinate among all nodes of {h_k(j) | 1 ≤ j ≤ 256} whose histogram value is non-zero, and record as X_max the coordinate of the node with the largest coordinate among all nodes whose histogram value is non-zero. Record as Ω_1 the set of pixel values of all pixels in J_k^dark whose pixel value lies in the range [X_min, X_mid], and record as Ω_2 the set of pixel values of all pixels in J_k^dark whose pixel value lies in the range (X_mid, X_max]. Here j is a positive integer with 1 ≤ j ≤ 256, h_k(j) denotes the histogram value of the node with coordinate j, X_mid = ⌊(X_min + X_max)/2⌋, and ⌊ ⌋ is the round-down (floor) operator;
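A sketch of step ① _2b under two stated assumptions (neither is spelled out in the recovered text): pixel value v in 0..255 is counted at histogram node j = v + 1 so that nodes run over 1 ≤ j ≤ 256, and X_mid is the floor of the midpoint of X_min and X_max:

```python
def histogram_bounds(dark_pixels):
    """Gray histogram over nodes 1..256, the lowest and highest non-empty
    nodes X_min and X_max, and their floor midpoint X_mid.

    dark_pixels: flat iterable of 8-bit dark-channel values (0..255).
    """
    h = [0] * 257  # h[j] used for j = 1..256; index 0 unused
    for v in dark_pixels:
        h[v + 1] += 1  # assumed value-to-node convention
    nonzero = [j for j in range(1, 257) if h[j] > 0]
    x_min, x_max = nonzero[0], nonzero[-1]
    x_mid = (x_min + x_max) // 2  # assumed floor((X_min + X_max) / 2)
    return h, x_min, x_max, x_mid
```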
① _2c. Obtain a first threshold, recorded as X_1*, by maximizing over Ω_1 an objective function of the quantities defined below, and obtain a second threshold, recorded as X_2*, by maximizing the same objective function over Ω_2. Here X_1 is any pixel value in Ω_1, and X_1* is the value of X_1 at which the objective is maximal; P_f(X_1) denotes the probability density function of all pixel values in Ω_1 lying in the range [X_min, X_1), μ_f(X_1) and σ_f(X_1) denote the mean and standard deviation of all pixel values in Ω_1 lying in [X_min, X_1), and μ_b(X_1) and σ_b(X_1) denote the mean and standard deviation of all pixel values in Ω_1 lying in [X_1, X_mid]. Likewise, X_2 is any pixel value in Ω_2, and X_2* is the value of X_2 at which the objective is maximal; P_f(X_2) denotes the probability density function of all pixel values in Ω_2 lying in [X_mid, X_2), μ_f(X_2) and σ_f(X_2) denote the mean and standard deviation of all pixel values in Ω_2 lying in [X_mid, X_2), and μ_b(X_2) and σ_b(X_2) denote the mean and standard deviation of all pixel values in Ω_2 lying in [X_2, X_max];
① _2d. Determine the region formed by all pixels of J_k^dark whose pixel value lies in (X_2*, X_max] as the bright region S_k,b; determine the region formed by all pixels whose pixel value lies in [X_min, X_1*) as the dark region S_k,d; and determine the region formed by all pixels whose pixel value lies in [X_1*, X_2*] as the normal region S_k,n.
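The partition of step ① _2d, sketched over a dark-channel image held as nested lists; each pixel is assigned to the bright, dark or normal region by comparing its value against the two thresholds (names are illustrative):

```python
def partition_regions(dark, x1, x2):
    """Classify each pixel of the dark-channel image by value:
    dark region [X_min, X1*), normal region [X1*, X2*],
    bright region (X2*, X_max]. Returns three (x, y) coordinate lists.
    """
    bright, dark_r, normal = [], [], []
    for y, row in enumerate(dark):
        for x, v in enumerate(row):
            if v > x2:
                bright.append((x, y))
            elif v < x1:
                dark_r.append((x, y))
            else:
                normal.append((x, y))
    return bright, dark_r, normal
```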
The acquisition process of F_k^bd in said step ① _3 is as follows:
① _3a1. Convert the bright region S_k,b from the RGB color space to the CIELAB color space; the three components in the CIELAB color space are the luminance component, the first chrominance component and the second chrominance component, respectively;
① _3b1. Divide S_k,b into M non-overlapping sub-blocks of size 8 × 8; if S_k,b cannot be divided exactly into 8 × 8 sub-blocks, discard the surplus pixels. Then form, from the luminance components of all pixels in each sub-block, a matrix of dimension 8 × 8, and record the 8 × 8 matrix formed from the luminance components of all pixels in the t-th sub-block as z_t. Here M is a positive integer with M > 1, and t is a positive integer with initial value 1 and 1 ≤ t ≤ M;
① _3c1. Apply a two-dimensional discrete cosine transform to the 8 × 8 matrix formed from the luminance components of all pixels in each sub-block to obtain the corresponding discrete cosine transform coefficient matrix, recording the DCT coefficient matrix corresponding to z_t as Z_t. Then compute the sum of all high-frequency coefficients and all mid-frequency coefficients in the DCT coefficient matrix corresponding to each sub-block, recording the sum of all high-frequency and mid-frequency coefficients in Z_t as S_t. Here Z_t has dimension 8 × 8;
① _3d1. From the sums S_t of all M sub-blocks, compute the bright-region feature of S_k,b, recorded as E_k;
① _3e1. Compute the mean and standard deviation of the luminance components of all pixels in the dark region S_k,d, recorded correspondingly as μ_k,d and σ_k,d;
① _3f1. Arrange E_k, μ_k,d and σ_k,d in order into a vector, giving F_k^bd = [E_k, μ_k,d, σ_k,d], where the symbol "[ ]" denotes a vector and [E_k, μ_k,d, σ_k,d] means that E_k, μ_k,d and σ_k,d are connected to form a vector.
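Steps ① _3b1 to ① _3d1 rest on the 8 × 8 two-dimensional DCT. The sketch below implements an orthonormal 2-D DCT-II in pure Python and, as a stated assumption (the exact mid/high-frequency bands are rendered only as figures in the source), sums every coefficient except the DC term as a stand-in for "all high-frequency and mid-frequency coefficients":

```python
import math

def dct2_8x8(z):
    """Orthonormal 2-D DCT-II of an 8x8 block given as nested lists."""
    N = 8
    c = lambda u: math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    Z = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for i in range(N):
                for j in range(N):
                    s += (z[i][j]
                          * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * j + 1) * v * math.pi / (2 * N)))
            Z[u][v] = c(u) * c(v) * s
    return Z

def mid_high_sum(Z):
    """Sum of all coefficients except the DC term Z[0][0] -- one simple
    reading of the patent's mid/high-frequency sum (an assumption)."""
    return sum(Z[u][v] for u in range(8) for v in range(8)) - Z[0][0]
```

For a constant block every AC coefficient vanishes, so this stand-in correctly reports zero mid/high-frequency energy for flat (detail-free) sub-blocks.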
The acquisition process of F_k^rc in said step ① _3 is as follows:
① _3a2. Convert S_k from the RGB color space to the CIELAB color space; the three components in the CIELAB color space are the luminance component, the first chrominance component and the second chrominance component, respectively;
① _3b2. Compute the first region contrast between the luminance components of all pixels in S_k,b and the luminance components of all pixels in S_k,n, recorded as C_k,1^L, and compute the second region contrast between them, recorded as C_k,2^L. Here the symbol "| |" is the absolute-value symbol, μ_k,b^L and σ_k,b^L denote the mean and standard deviation of the luminance components of all pixels in S_k,b, μ_k,n^L and σ_k,n^L denote the mean and standard deviation of the luminance components of all pixels in S_k,n, and ξ is a control parameter;
① _3c2. Compute the first region contrast between the luminance components of all pixels in S_k,d and the luminance components of all pixels in S_k,n, recorded as C_k,3^L, and compute the second region contrast between them, recorded as C_k,4^L. Here μ_k,d^L and σ_k,d^L denote the mean and standard deviation of the luminance components of all pixels in S_k,d;
① _3d2. Compute the first region contrast between the first chrominance components of all pixels in S_k,b and the first chrominance components of all pixels in S_k,n, recorded as C_k,1^a, and compute the second region contrast between them, recorded as C_k,2^a. Here μ_k,b^a and σ_k,b^a denote the mean and standard deviation of the first chrominance components of all pixels in S_k,b, and μ_k,n^a and σ_k,n^a denote the mean and standard deviation of the first chrominance components of all pixels in S_k,n;
① _3e2. Compute the first region contrast between the first chrominance components of all pixels in S_k,d and the first chrominance components of all pixels in S_k,n, recorded as C_k,3^a, and compute the second region contrast between them, recorded as C_k,4^a. Here μ_k,d^a and σ_k,d^a denote the mean and standard deviation of the first chrominance components of all pixels in S_k,d;
① _3f2. Arrange C_k,1^L, C_k,2^L, C_k,3^L, C_k,4^L, C_k,1^a, C_k,2^a, C_k,3^a and C_k,4^a in order into a vector, giving F_k^rc = [C_k,1^L, C_k,2^L, C_k,3^L, C_k,4^L, C_k,1^a, C_k,2^a, C_k,3^a, C_k,4^a], where the symbol "[ ]" denotes a vector and the notation means that the eight region contrasts are connected to form a vector.
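The exact contrast formula of steps ① _3b2 to ① _3e2 is rendered only as figures in the source; a common form consistent with the quantities listed there (two means, two standard deviations, an absolute value, and a control parameter ξ) is the mean difference normalized by the pooled spread. The sketch below assumes that form as an illustration, not the patent's verbatim definition:

```python
def region_contrast(vals_a, vals_b, xi=1e-3):
    """|mean_a - mean_b| / (std_a + std_b + xi): an assumed instance of
    the region contrast between two pixel-component populations.

    vals_a, vals_b: component values (luminance or chrominance) of the
    pixels in two regions; xi is the control parameter avoiding division
    by zero for flat regions.
    """
    def stats(vals):
        n = len(vals)
        mu = sum(vals) / n
        var = sum((v - mu) ** 2 for v in vals) / n  # population variance
        return mu, var ** 0.5
    mu_a, sd_a = stats(vals_a)
    mu_b, sd_b = stats(vals_b)
    return abs(mu_a - mu_b) / (sd_a + sd_b + xi)
```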
Compared with the prior art, the invention has the advantages that:
the method takes the influence of bright area characteristics and dark area characteristics on tone mapping into consideration, extracts bright and dark area characteristic vectors of a tone mapping image, extracts area contrast characteristic vectors of the tone mapping image at the same time, then constructs a global characteristic vector, and trains the global characteristic vectors of all tone mapping images in a training image set by using support vector regression to construct a quality prediction model; in the testing stage, the objective quality predicted value of the tone mapping image is obtained through calculating the global feature vector of the tone mapping image used for testing and predicting according to the quality prediction model constructed in the training stage, and because the obtained global feature vector information has stronger stability and can better reflect the quality change condition of the tone mapping image, the correlation between the objective evaluation result and the subjective perception is effectively improved.
Drawings
Fig. 1 is a block diagram of the overall implementation of the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The general implementation block diagram of the tone mapping image quality evaluation method provided by the invention is shown in fig. 1, and the method comprises two processes, namely a training stage and a testing stage;
the specific steps of the training phase process are as follows:
① _1. Select N tone-mapped images, each of width W and height H, to form a training image set, and record the k-th tone-mapped image in the training image set as S_k; here N is a positive integer with N > 1 (for example N = 1000), and k is a positive integer with initial value 1 and 1 ≤ k ≤ N.
① _2. Divide each tone-mapped image in the training image set into a bright region, a dark region and a normal region; for S_k, record the bright, dark and normal regions as S_k,b, S_k,d and S_k,n, respectively.
In this embodiment, the acquisition process of S_k,b, S_k,d and S_k,n in step ① _2 is as follows:
① _2a. Record the R, G and B components of S_k in the RGB color space as R_k, G_k and B_k, respectively. Then compute the dark channel image of S_k, recorded as J_k^dark, where the pixel value of the pixel at coordinate position (x, y) in J_k^dark is J_k^dark(x, y) = min over (x_1, y_1) ∈ C_{x,y} of min( R_k(x_1, y_1), G_k(x_1, y_1), B_k(x_1, y_1) ). Here 1 ≤ x ≤ W, 1 ≤ y ≤ H, min() is the minimum-value function, C_{x,y} denotes the set of coordinate positions of all pixels within the 3 × 3 neighborhood centered on the pixel at coordinate position (x, y), (x_1, y_1) is any coordinate position in C_{x,y}, and R_k(x_1, y_1), G_k(x_1, y_1) and B_k(x_1, y_1) denote the pixel values at coordinate position (x_1, y_1) in R_k, G_k and B_k, respectively.
① _2b. Compute the gray histogram distribution of J_k^dark, recorded as {h_k(j) | 1 ≤ j ≤ 256}. Record as X_min the coordinate of the node with the smallest coordinate among all nodes of {h_k(j) | 1 ≤ j ≤ 256} whose histogram value is non-zero, and record as X_max the coordinate of the node with the largest coordinate among all nodes whose histogram value is non-zero. Record as Ω_1 the set of pixel values of all pixels in J_k^dark whose pixel value lies in the range [X_min, X_mid], and record as Ω_2 the set of pixel values of all pixels in J_k^dark whose pixel value lies in the range (X_mid, X_max]. Here j is a positive integer with 1 ≤ j ≤ 256, h_k(j) denotes the histogram value of the node with coordinate j, X_mid = ⌊(X_min + X_max)/2⌋, and ⌊ ⌋ is the round-down (floor) operator.
① _2c. Obtain a first threshold, recorded as X_1*, by maximizing over Ω_1 an objective function of the quantities defined below, and obtain a second threshold, recorded as X_2*, by maximizing the same objective function over Ω_2. Here X_1 is any pixel value in Ω_1, and X_1* is the value of X_1 at which the objective is maximal; P_f(X_1) denotes the probability density function of all pixel values in Ω_1 lying in the range [X_min, X_1), μ_f(X_1) and σ_f(X_1) denote the mean and standard deviation of all pixel values in Ω_1 lying in [X_min, X_1), and μ_b(X_1) and σ_b(X_1) denote the mean and standard deviation of all pixel values in Ω_1 lying in [X_1, X_mid]. Likewise, X_2 is any pixel value in Ω_2, and X_2* is the value of X_2 at which the objective is maximal; P_f(X_2) denotes the probability density function of all pixel values in Ω_2 lying in [X_mid, X_2), μ_f(X_2) and σ_f(X_2) denote the mean and standard deviation of all pixel values in Ω_2 lying in [X_mid, X_2), and μ_b(X_2) and σ_b(X_2) denote the mean and standard deviation of all pixel values in Ω_2 lying in [X_2, X_max].
① _2d. Determine the region formed by all pixels of J_k^dark whose pixel value lies in (X_2*, X_max] as the bright region S_k,b; determine the region formed by all pixels whose pixel value lies in [X_min, X_1*) as the dark region S_k,d; and determine the region formed by all pixels whose pixel value lies in [X_1*, X_2*] as the normal region S_k,n.
① _3. From the bright region and the dark region of each tone-mapped image in the training image set, calculate its bright-dark region feature vector; for S_k, record it as F_k^bd. And from the bright, dark and normal regions of each tone-mapped image in the training image set, calculate its region contrast feature vector; for S_k, record it as F_k^rc. Here F_k^bd has dimension 3 × 1 and F_k^rc has dimension 8 × 1.
In this embodiment, the light and dark region feature vector of step ①_3 is obtained as follows:

①_3a1. Convert the region from the RGB color space to the CIELAB color space; its three components in the CIELAB color space are the luminance component, the first chrominance component (referred to as component a) and the second chrominance component (referred to as component b), respectively.

①_3b1. Divide the luminance component into M non-overlapping sub-blocks of size 8×8; if it cannot be divided evenly into 8×8 sub-blocks, the leftover pixel points are removed, i.e. not considered. Then form, for each sub-block, the 8×8 matrix of the luminance components of all its pixel points; the matrix of the t-th sub-block is denoted zt. Wherein M is a positive integer, M > 1, t is a positive integer with initial value 1, and 1 ≤ t ≤ M.

①_3c1. Apply a two-dimensional discrete cosine transform to the 8×8 matrix of each sub-block to obtain the corresponding matrix of discrete cosine transform coefficients; the matrix corresponding to zt is denoted Zt. Then compute the sum of all high-frequency coefficients and all mid-frequency coefficients of each sub-block's coefficient matrix; the sum for Zt is denoted St. Wherein Zt has dimension 8×8; in a discrete cosine transform coefficient matrix, the upper-left part holds the DC and low-frequency coefficients, the lower-right part the high-frequency coefficients, and the middle part the mid-frequency coefficients.

①_3d1. Compute the block-energy feature from the St values (the defining formula is given only as an image in the original).

①_3e1. Compute the mean and standard deviation of the luminance components of all pixel points in the region.

①_3f1. Arrange the block-energy feature, the mean and the standard deviation in order into the light and dark region feature vector; wherein the symbol "[ ]" is the vector-representing symbol, i.e. the three quantities are connected to form a vector.
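The block-DCT computation of steps ①_3b1 to ①_3f1 can be sketched as follows. This is a sketch assuming NumPy/SciPy; the exact low/mid/high coefficient partition and the form of the block-energy feature are not spelled out in the text, so a simple index split (u + v ≥ 3 counts as mid/high frequency) and a mean over blocks are assumed here.

```python
import numpy as np
from scipy.fft import dctn

def light_dark_features(lum):
    """3-D light/dark region feature vector of step ①_3 (sketch).

    `lum` is the luminance channel of one region as a 2-D float array.
    The patent sums mid- and high-frequency DCT coefficients of each
    8x8 block; the low/mid/high partition is not given precisely, so
    coefficients with index u + v >= 3 are treated as mid/high here
    (an assumption)."""
    h, w = lum.shape
    h8, w8 = h - h % 8, w - w % 8     # drop pixels that do not fill an 8x8 block
    u, v = np.indices((8, 8))
    midhigh = (u + v) >= 3            # assumed mid/high-frequency mask

    sums = []
    for i in range(0, h8, 8):
        for j in range(0, w8, 8):
            coef = dctn(lum[i:i + 8, j:j + 8], norm='ortho')
            sums.append(np.abs(coef[midhigh]).sum())

    # Feature vector: mean block energy, plus mean and std of luminance.
    return np.array([np.mean(sums), lum.mean(), lum.std()])
```

A constant region yields zero block energy and zero standard deviation, while textured regions produce positive energy, which is the behavior the feature is meant to capture.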
In this embodiment, the region contrast feature vector of step ①_3 is obtained as follows:

①_3a2. Convert the bright, dark and normal regions from the RGB color space to the CIELAB color space; the three components in the CIELAB color space are the luminance component, the first chrominance component (referred to as component a) and the second chrominance component (referred to as component b), respectively.

①_3b2. Compute the first regional contrast and the second regional contrast of the luminance components of the pixel points of a first pair of the segmented regions (the particular regions and the defining formulas are given only as images in the original). Wherein the symbol "|" is the absolute-value symbol; μ and σ, taken over a region, denote the mean and standard deviation of the luminance components of all pixel points of that region; and ξ is a control parameter, set to 10⁻⁶ in this embodiment.

①_3c2. Compute likewise the first regional contrast and the second regional contrast of the luminance components of the pixel points of a second pair of the segmented regions.

①_3d2. Compute the first regional contrast and the second regional contrast of the first chrominance components (component a) of the pixel points of the first pair of regions; the means and standard deviations are now those of the first chrominance components of all pixel points of the respective regions.

①_3e2. Compute the first regional contrast and the second regional contrast of the first chrominance components of the pixel points of the second pair of regions.

①_3f2. Arrange the eight regional contrasts in order into the region contrast feature vector; wherein the symbol "[ ]" is the vector-representing symbol, i.e. the eight regional contrasts are connected to form a vector.
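Steps ①_3b2 to ①_3f2 can be sketched as follows. Since the patent's contrast formulas appear only as images, the forms below (normalised absolute differences of means and of standard deviations, stabilised by ξ = 10⁻⁶) and the choice of region pairs (bright vs. normal and dark vs. normal) are assumptions consistent with the symbols the text defines.

```python
import numpy as np

XI = 1e-6  # control parameter xi from the patent

def region_contrast(a, b):
    """First and second regional contrast between two pixel populations.

    Assumed forms: normalised absolute difference of means, and of
    standard deviations, stabilised by XI (the exact formulas are given
    only as images in the source)."""
    c1 = abs(a.mean() - b.mean()) / (a.std() + b.std() + XI)
    c2 = abs(a.std() - b.std()) / (a.std() + b.std() + XI)
    return c1, c2

def contrast_vector(bright, dark, normal):
    """8-D region-contrast vector: luminance and first-chrominance
    contrasts for the (bright, normal) and (dark, normal) region pairs
    (assumed pairing). Each argument is an (N, 2) array of [L, a]
    values for one region."""
    feats = []
    for ch in (0, 1):                 # L channel, then a channel
        for region in (bright, dark):
            feats.extend(region_contrast(region[:, ch], normal[:, ch]))
    return np.array(feats)
```

With well-separated region statistics the contrasts are large, and they shrink toward zero when the regions have similar means and spreads.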
①_4. Form, for each tone-mapped image in the training image set, a global feature vector from its light and dark region feature vector and its region contrast feature vector; the global feature vector of the k-th image is denoted Fk. Wherein Fk has dimension 11×1; the symbol "[ ]" is the vector-representing symbol, i.e. the two feature vectors are connected to form one vector.

①_5. Form a training sample data set from the global feature vectors and mean subjective score differences of all tone-mapped images in the training image set; the training sample data set comprises N global feature vectors and N mean subjective score differences. Then train on all global feature vectors in the training sample data set using support vector regression as the machine learning method, so that the error between the regression function values obtained by training and the mean subjective score differences is minimal, and fit an optimal weight vector w* and an optimal bias term b*. Then construct from w* and b* the quality prediction model, written Q(F) = (w*)T F + b*. Wherein Q(F) is in functional representation; F denotes the global feature vector of a tone-mapped image and serves as the input vector of the quality prediction model; (w*)T is the transpose of w*; and Q(F) is a linear function of F.
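Step ①_5 amounts to fitting a linear support vector regressor. A minimal sketch with scikit-learn follows; the feature values and scores below are synthetic stand-ins for illustration, not data from the patent's databases.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in training data: N = 200 global feature vectors of
# dimension 11 and linearly generated mean-subjective-score differences.
rng = np.random.default_rng(0)
F = rng.uniform(0, 1, (200, 11))      # N global feature vectors
w_true = rng.uniform(-1, 1, 11)
dmos = F @ w_true + 0.5               # synthetic subjective scores

# Linear SVR: fitting yields the optimal weight vector w* and bias b*,
# which define the quality prediction model Q(F) = (w*)^T F + b*.
model = SVR(kernel='linear', C=10.0).fit(F, dmos)
w, b = model.coef_.ravel(), model.intercept_[0]
q_test = float(np.dot(w, F[0]) + b)   # predicted quality of one image
```

Applying Q to a test image's feature vector is exactly the test-stage prediction step described next.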
The test stage comprises the following specific steps:

②. For any tone-mapped image Itest used as a test, obtain the global feature vector of Itest, denoted Ftest, by the same operations as steps ①_2 to ①_4. Then test and predict Ftest with the quality prediction model constructed in the training stage, and take the predicted value corresponding to Ftest as the objective quality prediction value of Itest, denoted Qtest, i.e. Qtest = Q(Ftest). Wherein Itest has width W' and height H', Ftest has dimension 11×1, and Q(Ftest) is a linear function of Ftest.
In the present embodiment, two tone mapping image databases were used: the TMID database established by the University of Waterloo, Canada, which contains 120 tone-mapped images, and the ESPL-LIVE database established by the University of Texas at Austin, USA, which contains 1811 tone-mapped images. Two objective criteria commonly used to assess image quality evaluation methods serve as evaluation indices: the Pearson linear correlation coefficient (PLCC) after nonlinear regression, and the Spearman rank-order correlation coefficient (SROCC). Higher PLCC and SROCC values indicate better correlation between the evaluation results of the method of the present invention and the mean subjective score differences. Table 1 lists the correlation between the objective quality prediction values obtained by the method of the present invention and the mean subjective score differences.

TABLE 1. Correlation between the objective quality prediction values obtained by the method of the invention and the mean subjective score differences

Database    PLCC    SROCC
TMID        0.827   0.758
ESPL-LIVE   0.658   0.660

As can be seen from Table 1, the correlation between the objective quality prediction values of tone-mapped images obtained by the method of the present invention and the mean subjective score differences is very high, indicating that the objective evaluation results agree well with human subjective perception; this is sufficient to demonstrate the effectiveness of the method of the present invention.
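The two indices in Table 1 can be computed with SciPy as follows; the score values below are illustrative only, and the nonlinear regression step applied before PLCC in the patent is omitted here.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# PLCC measures linear agreement between objective predictions and
# subjective scores; SROCC measures rank agreement. Illustrative data:
predicted = np.array([3.1, 2.4, 4.0, 1.5, 3.6])
subjective = np.array([3.0, 2.6, 4.2, 1.4, 3.5])

plcc = pearsonr(predicted, subjective)[0]
srocc = spearmanr(predicted, subjective)[0]
```

Because the two score lists above induce identical rankings, SROCC is exactly 1 while PLCC is slightly below 1.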

Claims (4)

1. A tone mapping image quality evaluation method is characterized by comprising a training stage and a testing stage;
the specific steps of the training phase process are as follows:
①_1. Select N tone-mapped images of width W and height H to form a training image set (the symbol denoting the k-th tone-mapped image is given only as an image in the original); wherein N is a positive integer, N > 1, k is a positive integer with initial value 1, and 1 ≤ k ≤ N;

①_2. Divide each tone-mapped image in the training image set into a bright region, a dark region and a normal region;
①_3. Calculate the light and dark region feature vector of each tone-mapped image in the training image set from its bright region and dark region, and calculate the region contrast feature vector of each tone-mapped image from its bright region, dark region and normal region; wherein the light and dark region feature vector has dimension 3×1 and the region contrast feature vector has dimension 8×1;

①_4. Form, for each tone-mapped image in the training image set, a global feature vector from its light and dark region feature vector and its region contrast feature vector, the global feature vector of the k-th image being denoted Fk; wherein Fk has dimension 11×1, and the symbol "[ ]" is the vector-representing symbol, i.e. the two feature vectors are connected to form one vector;

①_5. Form a training sample data set from the global feature vectors and mean subjective score differences of all tone-mapped images in the training image set, the training sample data set comprising N global feature vectors and N mean subjective score differences; then train on all global feature vectors in the training sample data set using support vector regression as the machine learning method, so that the error between the regression function values obtained by training and the mean subjective score differences is minimal, fitting an optimal weight vector w* and an optimal bias term b*; then construct from w* and b* the quality prediction model, written Q(F) = (w*)T F + b*; wherein Q(F) is in functional representation, F denotes the global feature vector of a tone-mapped image and serves as the input vector of the quality prediction model, (w*)T is the transpose of w*, and Q(F) is a linear function of F;

the specific steps of the testing stage are as follows:

②. For any tone-mapped image Itest used as a test, obtain the global feature vector of Itest, denoted Ftest, by the same operations as steps ①_2 to ①_4; then test and predict Ftest with the quality prediction model constructed in the training stage, taking the predicted value corresponding to Ftest as the objective quality prediction value of Itest, denoted Qtest, i.e. Qtest = Q(Ftest); wherein Itest has width W' and height H', Ftest has dimension 11×1, and Q(Ftest) is a linear function of Ftest.
2. The tone mapping image quality evaluation method according to claim 1, wherein the bright region, dark region and normal region of step ①_2 are obtained as follows:

①_2a. Denote the R, G and B components of the tone-mapped image in the RGB color space, and compute its dark channel image: the pixel value of the pixel point at coordinate position (x, y) of the dark channel image is the minimum, over all coordinate positions (x1, y1) in Cx,y, of min(R(x1, y1), G(x1, y1), B(x1, y1)); wherein 1 ≤ x ≤ W, 1 ≤ y ≤ H, min() is the minimum-value function, Cx,y denotes the set of coordinate positions of all pixel points in the 3×3 neighborhood centered on the pixel point at (x, y), (x1, y1) is any coordinate position in Cx,y, and R(x1, y1), G(x1, y1) and B(x1, y1) denote the pixel values of the R, G and B components at (x1, y1);

①_2b. Compute the gray histogram distribution of the dark channel image, denoted {hk(j) | 1 ≤ j ≤ 256}; denote by Xmin the coordinate of the node with the smallest coordinate among all nodes of {hk(j) | 1 ≤ j ≤ 256} whose histogram value is non-zero, and by Xmax the coordinate of the node with the largest coordinate among all nodes whose histogram value is non-zero; denote by Ω1 the set of pixel values of all pixel points of the dark channel image whose pixel values lie in [Xmin, Xmid], and by Ω2 the set of pixel values of all pixel points whose pixel values lie in (Xmid, Xmax]; wherein j is a positive integer, 1 ≤ j ≤ 256, hk(j) denotes the histogram value of the node with coordinate j, Xmid = ⌊(Xmin + Xmax)/2⌋, and the symbol ⌊ ⌋ is the rounding-down operation symbol;

①_2c. Obtain a first threshold X1* by maximizing over Ω1, and a second threshold X2* by maximizing over Ω2: X1* is the value of X1 at which the objective function (given only as an image in the original) is maximized, where X1 is any pixel value in Ω1, Pf(X1) is the probability density function of all pixel values of Ω1 in [Xmin, X1), μf(X1) and σf(X1) are the mean and standard deviation of all pixel values of Ω1 in [Xmin, X1), and μb(X1) and σb(X1) are the mean and standard deviation of all pixel values of Ω1 in [X1, Xmid]; X2* is the value of X2 at which the corresponding objective function is maximized, where X2 is any pixel value in Ω2, Pf(X2) is the probability density function of all pixel values of Ω2 in [Xmid, X2), μf(X2) and σf(X2) are the mean and standard deviation of all pixel values of Ω2 in [Xmid, X2), and μb(X2) and σb(X2) are the mean and standard deviation of all pixel values of Ω2 in [X2, Xmax];

①_2d. Determine the area formed by all pixel points of the dark channel image whose pixel values lie in (X2*, Xmax] as the bright region, the area formed by all pixel points whose pixel values lie in [Xmin, X1*) as the dark region, and the area formed by all pixel points whose pixel values lie in [X1*, X2*] as the normal region.
3. The tone mapping image quality evaluation method according to claim 1 or 2, wherein the light and dark region feature vector of step ①_3 is obtained as follows:

①_3a1. Convert the region from the RGB color space to the CIELAB color space; its three components in the CIELAB color space are a luminance component, a first chrominance component and a second chrominance component, respectively;

①_3b1. Divide the luminance component into M non-overlapping sub-blocks of size 8×8; if it cannot be divided evenly into 8×8 sub-blocks, the leftover pixel points are removed; then form, for each sub-block, the 8×8 matrix of the luminance components of all its pixel points, the matrix of the t-th sub-block being denoted zt; wherein M is a positive integer, M > 1, t is a positive integer with initial value 1, and 1 ≤ t ≤ M;

①_3c1. Apply a two-dimensional discrete cosine transform to the 8×8 matrix formed by the luminance components of all pixel points in each sub-block to obtain the corresponding matrix of discrete cosine transform coefficients, the matrix corresponding to zt being denoted Zt; then compute the sum of all high-frequency coefficients and all mid-frequency coefficients of the coefficient matrix corresponding to each sub-block, the sum for Zt being denoted St; wherein Zt has dimension 8×8;

①_3d1. Compute the block-energy feature from the St values (the defining formula is given only as an image in the original);

①_3e1. Compute the mean and standard deviation of the luminance components of all pixel points in the region;

①_3f1. Arrange the block-energy feature, the mean and the standard deviation in order into the light and dark region feature vector; wherein the symbol "[ ]" is the vector-representing symbol, i.e. the three quantities are connected to form a vector.
4. The tone mapping image quality evaluation method according to claim 3, wherein the region contrast feature vector of step ①_3 is obtained as follows:

①_3a2. Convert the bright, dark and normal regions from the RGB color space to the CIELAB color space; the three components in the CIELAB color space are a luminance component, a first chrominance component and a second chrominance component, respectively;

①_3b2. Compute the first regional contrast and the second regional contrast of the luminance components of the pixel points of a first pair of the segmented regions (the particular regions and the defining formulas are given only as images in the original); wherein the symbol "|" is the absolute-value symbol, μ and σ, taken over a region, denote the mean and standard deviation of the luminance components of all pixel points of that region, and ξ is a control parameter;

①_3c2. Compute likewise the first regional contrast and the second regional contrast of the luminance components of the pixel points of a second pair of the segmented regions;

①_3d2. Compute the first regional contrast and the second regional contrast of the first chrominance components of the pixel points of the first pair of regions; wherein the means and standard deviations are now those of the first chrominance components of all pixel points of the respective regions;

①_3e2. Compute the first regional contrast and the second regional contrast of the first chrominance components of the pixel points of the second pair of regions;

①_3f2. Arrange the eight regional contrasts in order into the region contrast feature vector; wherein the symbol "[ ]" is the vector-representing symbol, i.e. the eight regional contrasts are connected to form a vector.
CN201910881340.2A 2019-09-18 2019-09-18 Tone mapping image quality evaluation method Active CN110717892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910881340.2A CN110717892B (en) 2019-09-18 2019-09-18 Tone mapping image quality evaluation method


Publications (2)

Publication Number Publication Date
CN110717892A true CN110717892A (en) 2020-01-21
CN110717892B CN110717892B (en) 2022-06-28

Family

ID=69209939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910881340.2A Active CN110717892B (en) 2019-09-18 2019-09-18 Tone mapping image quality evaluation method

Country Status (1)

Country Link
CN (1) CN110717892B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950596A (en) * 2021-03-09 2021-06-11 宁波大学 Tone mapping omnidirectional image quality evaluation method based on multi-region and multi-layer
CN113362354A (en) * 2021-05-07 2021-09-07 安徽国际商务职业学院 Method, system, terminal and storage medium for evaluating quality of tone mapping image
CN116630447A (en) * 2023-07-24 2023-08-22 成都海风锐智科技有限责任公司 Weather prediction method based on image processing

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540048A (en) * 2009-04-21 2009-09-23 北京航空航天大学 Image quality evaluating method based on support vector machine
CN105741328A (en) * 2016-01-22 2016-07-06 西安电子科技大学 Shot image quality evaluation method based on visual perception
CN105761227A (en) * 2016-03-04 2016-07-13 天津大学 Underwater image enhancement method based on dark channel prior algorithm and white balance
CN107105223A (en) * 2017-03-20 2017-08-29 宁波大学 A kind of tone mapping method for objectively evaluating image quality based on global characteristics
CN107172418A (en) * 2017-06-08 2017-09-15 宁波大学 A kind of tone scale map image quality evaluating method analyzed based on exposure status
KR101846743B1 (en) * 2016-11-28 2018-04-09 연세대학교 산학협력단 Objective quality assessment method and apparatus for tone mapped images
US20180247396A1 (en) * 2015-08-25 2018-08-30 Thomson Licensing Inverse tone mapping based on luminance zones
US20190281201A1 (en) * 2016-11-03 2019-09-12 Interdigital Ce Patent Holdings Method and device for estimating cast shadow regions and/or highlight regions in images


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Chen P, Li L, Zhang X, et al.: "Blind quality index for tone-mapped images based on luminance partition", Pattern Recognition *
P. Mohammadi, M. T. Pourazad and P. Nasiopoulos: "An Entropy-based Inverse Tone Mapping Operator for High Dynamic Range Applications", 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS) *
Samani, A., K. Panetta, and S. Agaian: "Transform domain measure of enhancement — TDME — for security imaging applications", IEEE International Conference on Technologies for Homeland Security, IEEE, 2014 *
Chen Xuexin, Bu Qingkai: "Research on fruit image recognition based on an improved maximum between-class variance method", Journal of Qinghai University (Engineering and Technology Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950596A (en) * 2021-03-09 2021-06-11 宁波大学 Tone mapping omnidirectional image quality evaluation method based on multi-region and multi-layer
CN112950596B (en) * 2021-03-09 2023-06-02 宁波大学 Tone mapping omnidirectional image quality evaluation method based on multiple areas and multiple levels
CN113362354A (en) * 2021-05-07 2021-09-07 安徽国际商务职业学院 Method, system, terminal and storage medium for evaluating quality of tone mapping image
CN113362354B (en) * 2021-05-07 2024-04-30 安徽国际商务职业学院 Quality evaluation method, system, terminal and storage medium for tone mapping image
CN116630447A (en) * 2023-07-24 2023-08-22 成都海风锐智科技有限责任公司 Weather prediction method based on image processing
CN116630447B (en) * 2023-07-24 2023-10-20 成都海风锐智科技有限责任公司 Weather prediction method based on image processing

Also Published As

Publication number Publication date
CN110717892B (en) 2022-06-28

Similar Documents

Publication Publication Date Title
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN108428227B (en) No-reference image quality evaluation method based on full convolution neural network
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN110717892B (en) Tone mapping image quality evaluation method
CN102209257B (en) Stereo image quality objective evaluation method
CN102547368B (en) Objective evaluation method for quality of stereo images
CN108074239B (en) No-reference image quality objective evaluation method based on prior perception quality characteristic diagram
CN109218716B (en) No-reference tone mapping image quality evaluation method based on color statistics and information entropy
CN107105223B (en) A kind of tone mapping method for objectively evaluating image quality based on global characteristics
CN105574901B (en) A kind of general non-reference picture quality appraisement method based on local contrast pattern
CN110120034B (en) Image quality evaluation method related to visual perception
CN112950596B (en) Tone mapping omnidirectional image quality evaluation method based on multiple areas and multiple levels
CN104376565A (en) Non-reference image quality evaluation method based on discrete cosine transform and sparse representation
Narasimhan et al. A comparison of contrast enhancement techniques in poor illuminated gray level and color images
CN114598864A (en) Full-reference ultrahigh-definition video quality objective evaluation method based on deep learning
Wang et al. Screen content image quality assessment with edge features in gradient domain
CN110910347A (en) Image segmentation-based tone mapping image no-reference quality evaluation method
CN108010023B (en) High dynamic range image quality evaluation method based on tensor domain curvature analysis
Shao et al. Multistage pooling for blind quality prediction of asymmetric multiply-distorted stereoscopic images
CN106683079A (en) No-reference image objective quality evaluation method based on structural distortion
CN107292331B (en) Based on unsupervised feature learning without reference screen image quality evaluating method
CN106960432B (en) A kind of no reference stereo image quality evaluation method
CN114067006B (en) Screen content image quality evaluation method based on discrete cosine transform
Makandar et al. Color image analysis and contrast stretching using histogram equalization
CN108171704B (en) No-reference image quality evaluation method based on excitation response

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231226

Address after: 315000 North of First Floor, Fifth Avenue, No. 719 Zhongxing Road, Yinzhou District, Ningbo City, Zhejiang Province

Patentee after: Ningbo Frontier Digital Technology Co.,Ltd.

Address before: 200120 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee before: Shanghai ruishenglian Information Technology Co.,Ltd.

Effective date of registration: 20231226

Address after: 200120 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: Shanghai ruishenglian Information Technology Co.,Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Patentee before: Ningbo University
