CN110717892B - Tone mapping image quality evaluation method - Google Patents

Tone mapping image quality evaluation method

Info

Publication number: CN110717892B (application CN201910881340.2A)
Authority: CN (China)
Prior art keywords: pixel, pixel points, tone mapping, components, value
Legal status: Active (granted)
Application number: CN201910881340.2A
Other languages: Chinese (zh)
Other versions: CN110717892A
Inventors: 邵枫 (Feng Shao), 王雪津 (Xuejin Wang)
Current assignees: Ningbo Frontier Digital Technology Co., Ltd.; Shanghai Ruishenglian Information Technology Co., Ltd.
Original assignee: Ningbo University
Application filed by Ningbo University
Priority: CN201910881340.2A; published as CN110717892A, granted as CN110717892B


Classifications

    • G06T 7/0002 - Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/214 - Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06T 7/11 - Segmentation; region-based segmentation
    • G06T 7/90 - Determination of colour characteristics
    • G06T 2207/20208 - High dynamic range [HDR] image processing
    • G06T 2207/30168 - Image quality inspection

Abstract

The invention discloses a tone-mapped image quality evaluation method. In the training stage, the method takes the influence of bright-region and dark-region characteristics on tone mapping into account: it extracts a bright-dark region feature vector and a region-contrast feature vector from each tone-mapped image, combines them into a global feature vector, and trains on the global feature vectors of all tone-mapped images in a training image set with support vector regression to construct a quality prediction model. In the testing stage, the global feature vector of the tone-mapped image under test is computed and predicted on with the quality prediction model constructed in the training stage to obtain its objective quality prediction. Because the extracted global feature vector is stable and reflects quality changes of tone-mapped images well, the correlation between the objective evaluation result and subjective perception is effectively improved.

Description

Tone mapping image quality evaluation method
Technical Field
The invention relates to image quality evaluation methods, and in particular to a tone-mapped image quality evaluation method.
Background
With the rapid development of display technology, high dynamic range (HDR) images have received increasing attention. High dynamic range images have rich tonal levels and can reproduce light and shadow far more faithfully than ordinary images. However, conventional display devices only support low-dynamic-range output. To resolve the mismatch between the dynamic range of real scenes and that of conventional displays, many tone mapping algorithms for high dynamic range images have been proposed. A tone mapping algorithm aims to compress the luminance of a high dynamic range image into a range a conventional display can reproduce, while preserving as much of the original image's detail as possible and avoiding visible artifacts. Accurate and objective evaluation of the performance of different tone mapping methods therefore plays an important guiding role in content production and post-processing.
Existing image quality evaluation methods, however, cannot simply be applied to tone-mapped images: a tone-mapped image has only a high dynamic range image as its reference, so such methods cannot predict objective evaluation values accurately. How to extract effective visual features so that the objective evaluation result agrees with human visual perception is thus the problem to be researched and solved in objective quality evaluation of tone-mapped images.
Disclosure of Invention
The invention aims to provide a tone mapping image quality evaluation method which can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the invention for solving the technical problems is as follows: a tone mapping image quality evaluation method is characterized by comprising a training stage and a testing stage;
the training stage process comprises the following specific steps:
Step ①-1: select N tone-mapped images of width W and height H to form a training image set, and denote the k-th tone-mapped image in the training image set as I_k; here N is a positive integer with N > 1, and k is a positive integer with initial value 1 and 1 ≤ k ≤ N.
Step ①-2: divide each tone-mapped image in the training image set into three regions, namely a bright region, a dark region and a normal region; the bright, dark and normal regions of I_k are correspondingly denoted R_k^br, R_k^da and R_k^no.
Step ①-3: compute the bright-dark region feature vector of each tone-mapped image in the training image set from its bright and dark regions, denoting that of I_k as f_k^bd, and compute the region-contrast feature vector of each tone-mapped image from its bright, dark and normal regions, denoting that of I_k as f_k^rc; here f_k^bd has dimension 3 × 1 and f_k^rc has dimension 8 × 1.
Step ①-4: for each tone-mapped image in the training image set, connect its bright-dark region feature vector and its region-contrast feature vector into a global feature vector; that of I_k is denoted F_k, F_k = [f_k^bd, f_k^rc], where F_k has dimension 11 × 1, the symbol "[ ]" denotes a vector, and [f_k^bd, f_k^rc] means that f_k^bd and f_k^rc are connected to form one vector.
Step ①-5: form a training sample data set from the global feature vectors and the mean subjective score differences of all tone-mapped images in the training image set; the training sample data set thus contains N global feature vectors and N mean subjective score differences. Then train on all global feature vectors in the training sample data set using support vector regression as the machine-learning method, so that the error between the regression function values obtained by training and the mean subjective score differences is minimized, fitting an optimal weight vector w* and an optimal bias term b*. Using w* and b*, construct the quality prediction model Q(F) = (w*)^T · φ(F) + b*, where Q() denotes the model in functional form, F denotes the global feature vector of a tone-mapped image and serves as the input vector of the quality prediction model, (w*)^T is the transpose of w*, and φ(F) is a linear function of F.
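Step ①-5 and the test stage amount to standard support vector regression on 11-dimensional feature vectors. The sketch below illustrates the train/predict pipeline with scikit-learn's SVR and a linear kernel (matching the linear φ(F) above); the feature extraction itself is assumed done elsewhere, and the training data here are synthetic stand-ins, not the patent's.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins: N = 200 training images, each with an 11-D global
# feature vector F_k and a mean subjective score difference value.
N = 200
F_train = rng.random((N, 11))
w_true = rng.standard_normal(11)
dmos_train = F_train @ w_true + 0.05 * rng.standard_normal(N)

# Training stage: fit SVR so the regression error against the subjective
# scores is minimized; a linear kernel corresponds to a linear phi(F).
model = SVR(kernel="linear", C=10.0, epsilon=0.01)
model.fit(F_train, dmos_train)

# Test stage: compute F_test for a test image (synthetic here) and predict
# its objective quality value Q_test.
F_test = rng.random((1, 11))
Q_test = float(model.predict(F_test)[0])
```

In practice the fitted `model` plays the role of Q(): its learned coefficients are the w* and b* of the patent's model.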
The test stage comprises the following specific steps:
Step ②-1: for any tone-mapped image I_test used for testing, obtain the global feature vector of I_test, denoted F_test, by the same operations as steps ①-2 to ①-4. Then, with the quality prediction model constructed in the training stage, predict on F_test; the predicted value corresponding to F_test is taken as the objective quality prediction of I_test, denoted Q_test: Q_test = (w*)^T · φ(F_test) + b*, where I_test has width W′ and height H′, F_test has dimension 11 × 1, and φ(F_test) is a linear function of F_test.
The regions R_k^br, R_k^da and R_k^no of step ①-2 are obtained as follows:
Step ①-2a: denote the R, G and B components of I_k in the RGB color space as R_k, G_k and B_k. Then compute the dark channel image of I_k, denoted D_k, where the pixel value of the pixel at coordinate position (x, y) in D_k is
D_k(x, y) = min over (x1, y1) ∈ C_{x,y} of min( R_k(x1, y1), G_k(x1, y1), B_k(x1, y1) ),
where 1 ≤ x ≤ W and 1 ≤ y ≤ H, min() is the minimum-value function, C_{x,y} is the set of coordinate positions of all pixels in the 3 × 3 neighborhood centered on the pixel at (x, y), (x1, y1) is any coordinate position in C_{x,y}, and R_k(x1, y1), G_k(x1, y1) and B_k(x1, y1) are the pixel values at coordinate position (x1, y1) in R_k, G_k and B_k, respectively.
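The dark-channel computation of step ①-2a is a per-pixel minimum over the three color channels followed by a 3 × 3 neighborhood minimum. A minimal NumPy sketch (function and variable names are illustrative, not from the patent; border pixels are handled here by edge replication, which the patent does not specify):

```python
import numpy as np

def dark_channel(rgb: np.ndarray) -> np.ndarray:
    """Dark channel of an H x W x 3 image: min over R, G, B at each pixel,
    then a minimum over the 3 x 3 neighborhood C_{x,y} of each pixel."""
    per_pixel_min = rgb.min(axis=2)                  # min(R, G, B) per pixel
    padded = np.pad(per_pixel_min, 1, mode="edge")   # replicate borders for the 3x3 window
    h, w = per_pixel_min.shape
    # Stack the nine shifted views of the padded image and take their minimum.
    stacked = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return stacked.min(axis=0)
```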
Step ①-2b: compute the gray-level histogram distribution of the dark channel image D_k, denoted {h_k(j) | 1 ≤ j ≤ 256}. Among all bins of {h_k(j) | 1 ≤ j ≤ 256} with a non-zero histogram value, record the smallest bin index as X_min and the largest bin index as X_max. Record the set of pixel values of all pixels of D_k whose value lies in [X_min, X_mid] as Ω_1, and the set of pixel values of all pixels of D_k whose value lies in (X_mid, X_max] as Ω_2. Here j is a positive integer with 1 ≤ j ≤ 256, h_k(j) is the histogram value of the bin with index j, X_mid = ⌊(X_min + X_max) / 2⌋, and ⌊ ⌋ is the round-down (floor) operator.
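Step ①-2b only needs the dark-channel histogram, its first and last non-empty bins, and the midpoint split. A sketch assuming 8-bit integer pixel values (the patent indexes bins 1..256; this sketch works directly on values 0..255, and all names are illustrative):

```python
import numpy as np

def split_by_histogram(dark: np.ndarray):
    """Return (X_min, X_mid, X_max, omega1, omega2) for a dark-channel
    image with integer values in [0, 255]."""
    hist = np.bincount(dark.ravel().astype(np.int64), minlength=256)
    nonzero = np.flatnonzero(hist)                  # bins with non-zero count
    x_min, x_max = int(nonzero[0]), int(nonzero[-1])
    x_mid = (x_min + x_max) // 2                    # floor((X_min + X_max) / 2)
    vals = dark.ravel()
    omega1 = vals[(vals >= x_min) & (vals <= x_mid)]  # values in [X_min, X_mid]
    omega2 = vals[(vals > x_mid) & (vals <= x_max)]   # values in (X_mid, X_max]
    return x_min, x_mid, x_max, omega1, omega2
```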
Step ①-2c: obtain a first threshold, denoted X1*, by maximizing a criterion over Ω_1, and a second threshold, denoted X2*, by maximizing the corresponding criterion over Ω_2; X1* is the value of X_1 that maximizes the first criterion and X2* is the value of X_2 that maximizes the second (the defining equations of the two criteria are rendered as images in the original and are built from the statistics listed below). Here X_1 is any pixel value in Ω_1; P_f(X_1) is the probability density function of all pixel values of Ω_1 in the range [X_min, X_1); μ_f(X_1) and σ_f(X_1) are the mean and standard deviation of all pixel values of Ω_1 in [X_min, X_1); and μ_b(X_1) and σ_b(X_1) are the mean and standard deviation of all pixel values of Ω_1 in [X_1, X_mid]. Likewise, X_2 is any pixel value in Ω_2; P_f(X_2) is the probability density function of all pixel values of Ω_2 in [X_mid, X_2); μ_f(X_2) and σ_f(X_2) are the mean and standard deviation of all pixel values of Ω_2 in [X_mid, X_2); and μ_b(X_2) and σ_b(X_2) are the mean and standard deviation of all pixel values of Ω_2 in [X_2, X_max].
Step ①-2d: determine the region formed by all pixels of D_k whose pixel value lies in (X2*, X_max] as the bright region R_k^br, the region formed by all pixels of D_k whose pixel value lies in [X_min, X1*) as the dark region R_k^da, and the region formed by all pixels of D_k whose pixel value lies in [X1*, X2*] as the normal region R_k^no.
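Given the two thresholds, step ①-2d is a pure thresholding of the dark-channel image into three masks. A sketch (`x1s` and `x2s` stand for X1* and X2*; names are illustrative):

```python
import numpy as np

def partition_regions(dark: np.ndarray, x1s: int, x2s: int,
                      x_min: int, x_max: int):
    """Boolean masks for the bright, dark and normal regions, using the
    thresholds of step 1-2c on the dark-channel image."""
    bright = (dark > x2s) & (dark <= x_max)    # values in (X2*, X_max]
    darkr  = (dark >= x_min) & (dark < x1s)    # values in [X_min, X1*)
    normal = (dark >= x1s) & (dark <= x2s)     # values in [X1*, X2*]
    return bright, darkr, normal
```

The three intervals tile [X_min, X_max], so every pixel whose dark-channel value lies in that range falls in exactly one region.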
The bright-dark region feature vector f_k^bd of step ①-3 is obtained as follows:
Step ①-3a1: convert I_k from the RGB color space to the CIELAB color space; the three components of I_k in the CIELAB color space are the luminance component, the first chrominance component and the second chrominance component, respectively.
Step ①-3b1: divide R_k^br into M non-overlapping sub-blocks of size 8 × 8; if R_k^br cannot be divided evenly into 8 × 8 sub-blocks, discard the leftover pixels. Then form, from the luminance components of all pixels in each sub-block of R_k^br, a matrix of dimension 8 × 8, and denote the 8 × 8 matrix formed from the luminance components of all pixels in the t-th sub-block as z_t; here M is a positive integer with M > 1, and t is a positive integer with initial value 1 and 1 ≤ t ≤ M.
Step ①-3c1: apply a two-dimensional discrete cosine transform to the 8 × 8 luminance matrix of each sub-block of R_k^br to obtain the corresponding discrete cosine transform coefficient matrix; the coefficient matrix corresponding to z_t is denoted Z_t. Then compute the sum of all high-frequency and all mid-frequency coefficients in the discrete cosine transform coefficient matrix of each sub-block; the sum of all high-frequency and mid-frequency coefficients in Z_t is denoted S_t. Here Z_t has dimension 8 × 8.
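Step ①-3c1 applies an 8 × 8 2-D DCT per sub-block and sums its mid- and high-frequency coefficients. The patent does not spell out the frequency partition; the sketch below makes the common assumption that coefficients with index sum u + v ≥ 2 (everything except the DC term and the two lowest AC terms) count as mid/high frequency, and builds the orthonormal DCT-II with plain NumPy:

```python
import numpy as np

def dct2_8x8(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT-II of an 8 x 8 block, Z = C z C^T."""
    n = 8
    u = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.cos((2 * x + 1) * u * np.pi / (2 * n)) * np.sqrt(2 / n)
    c[0, :] = np.sqrt(1 / n)          # DC row scaling for orthonormality
    return c @ block @ c.T

def mid_high_sum(block: np.ndarray, cutoff: int = 2) -> float:
    """S_t: sum of DCT coefficients with u + v >= cutoff (assumed band split)."""
    coeffs = dct2_8x8(block)
    u, v = np.indices(coeffs.shape)
    return float(coeffs[u + v >= cutoff].sum())
```

A flat (constant-luminance) block puts all its energy in the DC coefficient, so its mid/high-frequency sum is essentially zero; textured blocks yield larger sums, which is what makes S_t a detail indicator.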
Step ①-3d1: compute from {S_t | 1 ≤ t ≤ M} a detail feature of R_k^br, denoted here S_k^br (the defining equation, which aggregates the S_t over all M sub-blocks, is rendered as an image in the original).
Step ①-3e1: compute the mean and standard deviation of the luminance components of all pixels in the region concerned (the region symbol is an image in the original; since steps ①-3b1 to ①-3d1 operate on the bright region and f_k^bd is computed from the bright and dark regions, this is presumably the dark region R_k^da), correspondingly denoted μ_k and σ_k.
Step ①-3f1: arrange the detail feature of step ①-3d1 and the mean and standard deviation of step ①-3e1 in order into a vector, taken as f_k^bd: f_k^bd = [S_k^br, μ_k, σ_k], where the symbol "[ ]" denotes a vector and the three quantities are connected to form one vector.
The region-contrast feature vector f_k^rc of step ①-3 is obtained as follows:
Step ①-3a2: convert I_k from the RGB color space to the CIELAB color space; the three components of I_k in the CIELAB color space are the luminance component, the first chrominance component and the second chrominance component, respectively.
Step ①-3b2: compute the first region contrast between the luminance components of all pixels in one region of I_k and those in a second region, denoted c_1^L, and the corresponding second region contrast, denoted c_2^L (the region symbols and defining equations are rendered as images in the original). In these equations the symbol "| |" denotes absolute value; μ and σ denote the mean and the standard deviation of the luminance components of all pixels in each of the two regions involved; and ξ is a control parameter.
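The exact contrast equations are rendered as images in the original; they are built from the two regions' means and standard deviations and the control parameter ξ. As an illustration only, the sketch below uses one plausible such measure, |μ_A − μ_B| / (σ_A + σ_B + ξ), which is not necessarily the patent's exact formula:

```python
import numpy as np

def region_contrast(vals_a: np.ndarray, vals_b: np.ndarray,
                    xi: float = 1e-3) -> float:
    """Illustrative contrast between two regions' component values:
    |mu_A - mu_B| / (sigma_A + sigma_B + xi); xi keeps the ratio finite."""
    mu_a, mu_b = vals_a.mean(), vals_b.mean()
    sd_a, sd_b = vals_a.std(), vals_b.std()
    return float(abs(mu_a - mu_b) / (sd_a + sd_b + xi))
```

With `vals_a` and `vals_b` set to the luminance (or first-chrominance) values of two of the bright/dark/normal regions, repeated over the four region/component pairings of steps ①-3b2 to ①-3e2, this yields the eight contrast features.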
Step ①-3c2: likewise compute the first region contrast between the luminance components of all pixels in another pair of regions of I_k, denoted c_3^L, and the corresponding second region contrast, denoted c_4^L; the defining equations again use the mean and the standard deviation of the luminance components of all pixels in each of the two regions involved.
Step ①-3d2: compute the first region contrast between the first chrominance components of all pixels in one region of I_k and those in a second region, denoted c_1^a, and the corresponding second region contrast, denoted c_2^a; the defining equations use the mean and the standard deviation of the first chrominance components of all pixels in each of the two regions involved.
Step ①-3e2: likewise compute the first region contrast between the first chrominance components of all pixels in another pair of regions of I_k, denoted c_3^a, and the corresponding second region contrast, denoted c_4^a; the defining equations use the mean and the standard deviation of the first chrominance components of all pixels in each of the two regions involved.
Step ①-3f2: arrange the eight region contrasts obtained in steps ①-3b2 to ①-3e2 in order into a vector, taken as f_k^rc: f_k^rc = [c_1^L, c_2^L, c_3^L, c_4^L, c_1^a, c_2^a, c_3^a, c_4^a], where the symbol "[ ]" denotes a vector and the eight contrasts are connected to form one vector.
Compared with the prior art, the invention has the following advantages:
The method takes the influence of bright-region and dark-region characteristics on tone mapping into account: it extracts a bright-dark region feature vector and a region-contrast feature vector from each tone-mapped image, combines them into a global feature vector, and trains on the global feature vectors of all tone-mapped images in the training image set with support vector regression to construct a quality prediction model. In the testing stage, the global feature vector of the tone-mapped image under test is computed and predicted on with the quality prediction model constructed in the training stage to obtain its objective quality prediction. Because the extracted global feature vector is stable and reflects the quality changes of tone-mapped images well, the correlation between the objective evaluation result and subjective perception is effectively improved.
Drawings
Fig. 1 is a block diagram of a general implementation of the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and embodiments.
The general implementation block diagram of the tone mapping image quality evaluation method provided by the invention is shown in fig. 1, and the method comprises a training stage and a testing stage;
the specific steps of the training phase process are as follows:
firstly, 1, selecting N tone mapping images with width W and height H to form a training image set, and recording the kth tone mapping image in the training image set as
Figure BDA00022059636700000814
Wherein, N is a positive integer, N is more than 1, if N is 1000, k is a positive integer, the initial value of k is 1, and k is more than or equal to 1 and less than or equal to N.
Firstly, 2, dividing each tone mapping image in the training image set into a bright area, a dark area and a normal area, and dividing the bright area, the dark area and the normal area
Figure BDA0002205963670000081
The bright area, the dark area and the normal area are correspondingly recorded as
Figure BDA0002205963670000082
And
Figure BDA0002205963670000083
in this embodiment, in step (r _ 2)
Figure BDA0002205963670000084
And
Figure BDA0002205963670000085
the acquisition process comprises the following steps:
(ii) (-) 2 a)
Figure BDA0002205963670000086
R, G, and B components in the RGB color space are expressed as
Figure BDA0002205963670000087
Figure BDA0002205963670000088
Then calculate
Figure BDA0002205963670000089
Is recorded as a dark channel image
Figure BDA00022059636700000810
Will be provided with
Figure BDA00022059636700000811
The pixel value of the pixel point with the middle coordinate position (x, y) is recorded as
Figure BDA00022059636700000812
Figure BDA00022059636700000813
Wherein x is more than or equal to 1 and less than or equal to W, y is more than or equal to 1 and less than or equal to H, min () is a function for taking the minimum value, C x,yRepresenting a set of coordinate positions of all pixel points within a 3 × 3 neighborhood range centered on a pixel point whose coordinate position is (x, y), (x1,y1) Is Cx,yAny one of the coordinate positions of (a) and (b),
Figure BDA0002205963670000091
to represent
Figure BDA0002205963670000092
The middle coordinate position is (x)1,y1) The pixel value of the pixel point of (a),
Figure BDA0002205963670000093
to represent
Figure BDA0002205963670000094
The middle coordinate position is (x)1,y1) The pixel value of the pixel point of (a),
Figure BDA0002205963670000095
to represent
Figure BDA0002205963670000096
The middle coordinate position is (x)1,y1) The pixel value of the pixel point of (1).
(r-2 b) calculation
Figure BDA0002205963670000097
Distribution of gray histogram of (1), noted as { hk(j) J is more than or equal to 1 and less than or equal to 256 }; then will { h }k(j) The coordinate of the node with the minimum coordinate in all nodes with the non-zero histogram value in the [ 1 is not less than j is not more than 256 ] is recorded as XminWill { h }k(j) The coordinate of the node with the maximum coordinate in all nodes with the non-zero histogram value in the [ 1 is not less than j is not more than 256 ] is recorded as XmaxWill be
Figure BDA0002205963670000098
The middle pixel value belongs to [ Xmin,Xmid]The set of pixel values of all the pixels in the range is recorded as omega1Will be
Figure BDA0002205963670000099
Middle pixel value belongs to (X)mid,Xmax]The set of pixel values of all the pixels in the range is recorded as omega2(ii) a Wherein j is a positive integer, j is more than or equal to 1 and less than or equal to 256, and hk(j) Represents { h }k(j) J is more than or equal to 1 and less than or equal to 256, the histogram value of the node with the coordinate of j,
Figure BDA00022059636700000910
(symbol)
Figure BDA00022059636700000911
to round the operator down.
R 2c, by maximizing omega1Obtain a first threshold, denoted as X1 *
Figure BDA00022059636700000912
And by maximizing omega 2Obtain a second threshold, denoted as X2*,
Figure BDA00022059636700000913
Wherein, the first and the second end of the pipe are connected with each other,
Figure BDA00022059636700000914
express the solution such that
Figure BDA00022059636700000915
X when the value of (A) is maximum1Value of (A), X1Is omega1Of any one pixel value, Pf(X1) Represents omega1In (A) is [ X ]min,X1) Probability density function, mu, of all pixel values within a rangef(X1) Represents omega1In (A) is [ X ]min,X1) Mean, σ, of all pixel values in the rangef(X1) Represents omega1In (A) is [ X ]min,X1) Standard deviation, μ, of all pixel values within a rangeb(X1) Represents omega1In (A) is [ X ]1,Xmid]Mean, σ, of all pixel values in the rangeb(X1) Represents omega1In (A) is [ X ]1,Xmid]The standard deviation of all pixel values within the range,
Figure BDA0002205963670000101
express the finding such that
Figure BDA0002205963670000102
X when the value of (A) is maximum2Value of (A), X2Is omega2Of any one pixel value, Pf(X2) Watch (A)Show omega2In (A) is [ X ]mid,X2) Probability density function, mu, of all pixel values within a rangef(X2) Represents omega2In (A) is [ X ]mid,X2) Mean, σ, of all pixel values in the rangef(X2) Represents omega2In (A) is [ X ]mid,X2) Standard deviation, μ, of all pixel values within a rangeb(X2) Represents omega2In (A) is [ X ]2,Xmax]Mean, σ, of all pixel values in the rangeb(X2) Represents omega2In (A) is [ X ]2,Xmax]Standard deviation of all pixel values within the range.
①_2d) Determine the region formed by all the pixel points of the dark channel image I_k^dark whose pixel values lie in (X2*, X_max] as the bright region R_k^bri; determine the region formed by all the pixel points whose pixel values lie in [X_min, X1*) as the dark region R_k^dar; and determine the region formed by all the pixel points whose pixel values lie in [X1*, X2*] as the normal region R_k^nor.
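The bright/dark/normal split amounts to picking one threshold inside each half of the dark-channel histogram. The exact maximization criterion survives only as a formula image, so the sketch below substitutes the classic Otsu between-class variance, which has the same structure (foreground/background populations, their weights and means); the function name and this substitution are assumptions, not the patent's stated criterion.

```python
import numpy as np

def split_regions(dark_channel):
    """Split a dark-channel image into bright / dark / normal masks.

    The patent maximizes a separability criterion over each half of the
    histogram; as that criterion is only given as a figure, this sketch
    substitutes Otsu's between-class variance as a stand-in.
    """
    vals = dark_channel.ravel()
    x_min, x_max = int(vals.min()), int(vals.max())
    x_mid = (x_min + x_max) // 2

    def otsu(v, lo, hi):
        # search the threshold maximizing between-class variance in [lo, hi]
        best_t, best_score = lo, -1.0
        for t in range(lo + 1, hi):
            f, b = v[v < t], v[v >= t]
            if len(f) == 0 or len(b) == 0:
                continue
            w_f, w_b = len(f) / len(v), len(b) / len(v)
            score = w_f * w_b * (f.mean() - b.mean()) ** 2
            if score > best_score:
                best_t, best_score = t, score
        return best_t

    x1 = otsu(vals[(vals >= x_min) & (vals <= x_mid)], x_min, x_mid)  # X1*
    x2 = otsu(vals[(vals > x_mid) & (vals <= x_max)], x_mid, x_max)   # X2*
    dark = dark_channel < x1
    bright = dark_channel > x2
    normal = ~dark & ~bright
    return bright, dark, normal
```

The three returned boolean masks partition the image, matching step ①_2d's three pixel-value intervals.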
①_3) Calculate the bright-dark region feature vector of each tone mapping image in the training image set from the bright region and the dark region of each tone mapping image, and record the bright-dark region feature vector of I_k as F_k^bd; calculate the region contrast feature vector of each tone mapping image in the training image set from the bright region, the dark region and the normal region of each tone mapping image, and record the region contrast feature vector of I_k as F_k^rc; wherein F_k^bd has dimension 3×1 and F_k^rc has dimension 8×1.
In this embodiment, F_k^bd in step ①_3 is obtained as follows:
①_3a1) Convert I_k from the RGB color space to the CIELAB color space; the three components of I_k in the CIELAB color space are the luminance component, the first chrominance component (referred to as component a) and the second chrominance component (referred to as component b), respectively.
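The conversion in step ①_3a1 is the standard one; skimage.color.rgb2lab performs it directly, and a dependency-free numpy version of the usual sRGB → XYZ (D65) → CIELAB pipeline is sketched below.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0,1], shape H x W x 3) to CIELAB.

    Standard sRGB -> XYZ (D65) -> L*a*b* pipeline; L* is the luminance
    component, a* the first chrominance component, b* the second.
    """
    rgb = np.asarray(rgb, dtype=float)
    # undo the sRGB gamma
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (D65 reference white)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])          # normalize by white point
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    lab = np.empty_like(xyz)
    lab[..., 0] = 116 * f[..., 1] - 16                # L* (luminance)
    lab[..., 1] = 500 * (f[..., 0] - f[..., 1])       # a* (first chrominance)
    lab[..., 2] = 200 * (f[..., 1] - f[..., 2])       # b* (second chrominance)
    return lab
```

White maps to L* ≈ 100 with near-zero chrominance, black to L* = 0, which is a quick sanity check for the conversion.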
①_3b1) Divide I_k into M non-overlapping sub-blocks of size 8×8; if I_k cannot be divided evenly into sub-blocks of size 8×8, discard the surplus pixel points, i.e. do not consider them. Then form, from the luminance components of all the pixel points in each sub-block of I_k, a matrix of dimension 8×8, and record the matrix of dimension 8×8 formed from the luminance components of all the pixel points in the t-th sub-block as z_t; wherein M is a positive integer, M > 1, t is a positive integer, the initial value of t is 1, and 1 ≤ t ≤ M.
①_3c1) Apply a two-dimensional discrete cosine transform to the matrix of dimension 8×8 formed from the luminance components of all the pixel points in each sub-block of I_k to obtain the corresponding discrete cosine transform coefficient matrix, and record the discrete cosine transform coefficient matrix corresponding to z_t as Z_t. Then compute the sum of all high-frequency coefficients and all mid-frequency coefficients in the discrete cosine transform coefficient matrix corresponding to each sub-block, and record the sum of all high-frequency and all mid-frequency coefficients of Z_t as S_t; wherein Z_t has dimension 8×8, and in a discrete cosine transform coefficient matrix the upper-left part holds the DC and low-frequency coefficients, the lower-right part the high-frequency coefficients, and the middle part the mid-frequency coefficients.
①_3d1) Compute from S_1, …, S_M the first feature of I_k, recorded as f_k.
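Steps ①_3b1 to ①_3c1 can be sketched as follows. The exact partition into mid- and high-frequency coefficients is shown only as a figure in the source, so the mask used here (all coefficients with u + v ≥ 3, i.e. everything outside the top-left DC/low-frequency corner) and the use of coefficient magnitudes are assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    # orthonormal DCT-II basis matrix C, so that Z = C @ z @ C.T
    k = np.arange(n)
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C

def block_dct_sums(lum):
    """Per-block sums S_t of mid- and high-frequency DCT coefficients.

    Surplus rows/columns that do not fill an 8 x 8 block are dropped, as in
    the patent. The frequency mask (u + v >= 3) and the use of absolute
    values are assumptions, since the source gives the split as a figure.
    """
    C = dct_matrix(8)
    h, w = lum.shape
    h, w = h - h % 8, w - w % 8                 # remove surplus pixels
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    mask = (u + v) >= 3                          # assumed mid+high region
    sums = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            Z = C @ lum[i:i + 8, j:j + 8] @ C.T  # 2-D DCT of block z_t
            sums.append(np.abs(Z[mask]).sum())   # S_t
    return np.asarray(sums)
```

A constant block has all of its energy in the DC coefficient, so its S_t is (numerically) zero, which confirms the mask excludes the DC/low-frequency corner.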
①_3e1) Compute the mean and standard deviation of the luminance components of all the pixel points, correspondingly recorded as μ_k and σ_k.
①_3f1) Arrange f_k, μ_k and σ_k in order into a vector, F_k^bd = [f_k, μ_k, σ_k]; wherein the symbol "[ ]" is a vector-representing symbol, and [f_k, μ_k, σ_k] indicates that f_k, μ_k and σ_k are connected to form a vector.
In this embodiment, F_k^rc in step ①_3 is obtained as follows:
①_3a2) Convert I_k from the RGB color space to the CIELAB color space; the three components of I_k in the CIELAB color space are the luminance component, the first chrominance component (referred to as component a) and the second chrominance component (referred to as component b), respectively.
①_3b2) Compute the first region contrast between the luminance components of all the pixel points in one region of I_k and those in a second region, recorded as C_L1, and compute the second region contrast of the luminance components of the same two regions, recorded as C_L2; wherein the symbol "| |" is the absolute-value symbol, μ and σ denote, for each of the two regions, the mean and the standard deviation of the luminance components of all its pixel points, and ξ is a control parameter, taken as 10^-6 in this embodiment.
①_3c2) Compute, in the same way, the first region contrast of the luminance components between another pair of the three regions of I_k, recorded as C_L3, and the corresponding second region contrast, recorded as C_L4; wherein μ and σ denote the mean and the standard deviation of the luminance components of all the pixel points of the additionally involved region.
①_3d2) Compute the first region contrast between the first chrominance components of all the pixel points in one region of I_k and those in a second region, recorded as C_a1, and compute the second region contrast of the first chrominance components of the same two regions, recorded as C_a2; wherein μ and σ denote, for each of the two regions, the mean and the standard deviation of the first chrominance components of all its pixel points.
①_3e2) Compute, in the same way, the first region contrast of the first chrominance components between another pair of the three regions of I_k, recorded as C_a3, and the corresponding second region contrast, recorded as C_a4; wherein μ and σ denote the mean and the standard deviation of the first chrominance components of all the pixel points of the additionally involved region.
①_3f2) Arrange the eight region contrasts in order into a vector, F_k^rc = [C_L1, C_L2, C_L3, C_L4, C_a1, C_a2, C_a3, C_a4]; wherein the symbol "[ ]" is a vector-representing symbol, and the notation indicates that the eight region contrasts are connected to form a vector.
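The region-contrast formulas in steps ①_3b2 to ①_3e2 survive only as formula images. A plausible reading, used below purely as an assumption, pairs a mean-based first contrast with a spread-based second contrast, both stabilized by the control parameter ξ so denominators never reach zero.

```python
import numpy as np

XI = 1e-6  # control parameter xi from the embodiment

def region_contrast(a, b, xi=XI):
    """First/second region contrast between two pixel populations.

    Assumed forms (the patent's formulas are given only as images):
      c1 = |mu_A - mu_B| / (mu_A + mu_B + xi)        mean-based contrast
      c2 = |sigma_A - sigma_B| / (sigma_A + sigma_B + xi)  spread-based
    """
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    c1 = abs(a.mean() - b.mean()) / (a.mean() + b.mean() + xi)
    c2 = abs(a.std() - b.std()) / (a.std() + b.std() + xi)
    return c1, c2
```

Calling this on the luminance values of two region pairs and on the first-chrominance values of two region pairs yields eight scalars, matching the 8×1 dimension of the region contrast feature vector.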
①_4) Form the global feature vector of each tone mapping image in the training image set from its bright-dark region feature vector and its region contrast feature vector, and record the global feature vector of I_k as F_k, F_k = [F_k^bd, F_k^rc]; wherein F_k has dimension 11×1, the symbol "[ ]" is a vector-representing symbol, and [F_k^bd, F_k^rc] indicates that F_k^bd and F_k^rc are connected to form a vector.
①_5) Form a training sample data set from the global feature vectors and the average subjective score differences of all the tone mapping images in the training image set; the training sample data set contains N global feature vectors and N average subjective score differences. Then train on all the global feature vectors in the training sample data set with support vector regression as the machine-learning method, such that the error between the regression function value obtained by training and the average subjective score difference is minimal, and obtain by fitting the optimal weight vector w* and the optimal bias term b*. Then use the optimal weight vector w* and the optimal bias term b* to construct the quality prediction model, recorded as q(F) = (w*)^T F + b*; wherein q() is the functional representation, F denotes the global feature vector of a tone mapping image and serves as the input vector of the quality prediction model, (w*)^T is the transpose of w*, and (w*)^T F + b* is a linear function of F.
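The training step above can be sketched as follows. The patent's stated tool is support vector regression (e.g. sklearn.svm.SVR with a linear kernel); to keep the sketch dependency-free, ordinary least squares stands in for the SVR fit, which is an assumption, but the resulting model has the same linear form (w*)^T F + b*.

```python
import numpy as np

def train_quality_model(features, mos_diff):
    """Fit the linear quality prediction model q(F) = w^T F + b.

    `features` is the N x 11 matrix of global feature vectors and
    `mos_diff` the N average subjective score differences. Least squares
    is used here as a stand-in for the patent's support vector regression.
    """
    X = np.asarray(features, dtype=float)        # N x 11
    y = np.asarray(mos_diff, dtype=float)        # N
    A = np.hstack([X, np.ones((len(X), 1))])     # append a bias column
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    w, b = sol[:-1], sol[-1]                     # weight vector, bias term
    return w, b

def predict_quality(w, b, F_test):
    # objective quality prediction for a test image's global feature vector
    return float(w @ np.asarray(F_test, dtype=float) + b)
```

`predict_quality` is exactly the test-stage operation: the trained model applied to F_test gives the objective quality prediction value.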
The test stage process comprises the following specific steps:
② For any tone mapping image I_test used for testing, obtain the global feature vector of I_test by the same operations as steps ①_2 to ①_4, recorded as F_test; then test and predict F_test according to the quality prediction model constructed in the training stage, and take the predicted value corresponding to F_test as the objective quality prediction value of I_test, recorded as Q_test, Q_test = q(F_test) = (w*)^T F_test + b*; wherein I_test has width W′ and height H′, F_test has dimension 11×1, and (w*)^T F_test + b* is a linear function of F_test.
In the present embodiment, two tone mapping image databases are used: the TMID database established by the University of Waterloo, Canada, which contains 120 tone mapping images, and the ESPL-LIVE database established by the University of Texas at Austin, USA, which contains 1811 tone mapping images. Two objective parameters commonly used for assessing image quality evaluation methods serve as evaluation indices: the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC), both under nonlinear regression. Higher PLCC and SROCC values indicate better correlation between the evaluation results of the method of the present invention and the average subjective score differences. Table 1 lists the correlation between the objective quality prediction values obtained by the method of the present invention and the average subjective score differences.
TABLE 1 Correlation between the objective quality prediction values obtained by the method of the invention and the average subjective score differences

Database     PLCC    SROCC
TMID         0.827   0.758
ESPL-LIVE    0.658   0.660
As can be seen from Table 1, the correlation between the objective quality prediction values of the tone mapping images obtained by the method of the present invention and the average subjective score differences is high, indicating that the objective evaluation results agree well with human subjective perception and demonstrating the effectiveness of the method of the present invention.
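PLCC and SROCC can be computed as below. The nonlinear-regression (logistic) mapping usually applied before PLCC in this evaluation protocol is omitted from the sketch, and ranks are ordinal (ties are not averaged).

```python
import numpy as np

def plcc(x, y):
    # Pearson linear correlation coefficient
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def srocc(x, y):
    # Spearman rank-order correlation = Pearson correlation of the ranks
    # (ordinal ranks via double argsort; ties not averaged in this sketch)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))
```

Both coefficients lie in [-1, 1]; values near 1 mean the objective predictions order and scale the images consistently with the subjective scores.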

Claims (2)

1. A tone mapping image quality evaluation method is characterized by comprising a training stage and a testing stage;
the specific steps of the training phase process are as follows:
①_1) Select N tone mapping images of width W and height H to form a training image set, and record the k-th tone mapping image in the training image set as I_k; wherein N is a positive integer, N > 1, k is a positive integer, the initial value of k is 1, and 1 ≤ k ≤ N;
①_2) Divide each tone mapping image in the training image set into a bright region, a dark region and a normal region, and record the bright region, the dark region and the normal region of I_k correspondingly as R_k^bri, R_k^dar and R_k^nor;
①_3) Calculate the bright-dark region feature vector of each tone mapping image in the training image set from the bright region and the dark region of each tone mapping image, and record the bright-dark region feature vector of I_k as F_k^bd; calculate the region contrast feature vector of each tone mapping image in the training image set from the bright region, the dark region and the normal region of each tone mapping image, and record the region contrast feature vector of I_k as F_k^rc; wherein F_k^bd has dimension 3×1 and F_k^rc has dimension 8×1;
in said step ①_3, F_k^bd is obtained as follows:
①_3a1) Convert I_k from the RGB color space to the CIELAB color space; the three components of I_k in the CIELAB color space are a luminance component, a first chrominance component and a second chrominance component, respectively;
①_3b1) Divide I_k into M non-overlapping sub-blocks of size 8×8; if I_k cannot be divided evenly into sub-blocks of size 8×8, discard the surplus pixel points; then form, from the luminance components of all the pixel points in each sub-block of I_k, a matrix of dimension 8×8, and record the matrix of dimension 8×8 formed from the luminance components of all the pixel points in the t-th sub-block as z_t; wherein M is a positive integer, M > 1, t is a positive integer, the initial value of t is 1, and 1 ≤ t ≤ M;
①_3c1) Apply a two-dimensional discrete cosine transform to the matrix of dimension 8×8 formed from the luminance components of all the pixel points in each sub-block of I_k to obtain the corresponding discrete cosine transform coefficient matrix, and record the discrete cosine transform coefficient matrix corresponding to z_t as Z_t; then compute the sum of all high-frequency coefficients and all mid-frequency coefficients in the discrete cosine transform coefficient matrix corresponding to each sub-block, and record the sum of all high-frequency and all mid-frequency coefficients of Z_t as S_t; wherein the dimension of Z_t is 8×8;
①_3d1) Compute from S_1, …, S_M the first feature of I_k, recorded as f_k;
①_3e1) Compute the mean and standard deviation of the luminance components of all the pixel points, correspondingly recorded as μ_k and σ_k;
①_3f1) Arrange f_k, μ_k and σ_k in order into a vector, F_k^bd = [f_k, μ_k, σ_k]; wherein the symbol "[ ]" is a vector-representing symbol, and [f_k, μ_k, σ_k] indicates that f_k, μ_k and σ_k are connected to form a vector;
in said step ①_3, F_k^rc is obtained as follows:
①_3a2) Convert I_k from the RGB color space to the CIELAB color space; the three components of I_k in the CIELAB color space are a luminance component, a first chrominance component and a second chrominance component, respectively;
①_3b2) Compute the first region contrast between the luminance components of all the pixel points in one region of I_k and those in a second region, recorded as C_L1, and compute the second region contrast of the luminance components of the same two regions, recorded as C_L2; wherein the symbol "| |" is the absolute-value symbol, μ and σ denote, for each of the two regions, the mean and the standard deviation of the luminance components of all its pixel points, and ξ is a control parameter;
①_3c2) Compute, in the same way, the first region contrast of the luminance components between another pair of the three regions of I_k, recorded as C_L3, and the corresponding second region contrast, recorded as C_L4; wherein μ and σ denote the mean and the standard deviation of the luminance components of all the pixel points of the additionally involved region;
①_3d2) Compute the first region contrast between the first chrominance components of all the pixel points in one region of I_k and those in a second region, recorded as C_a1, and compute the second region contrast of the first chrominance components of the same two regions, recorded as C_a2; wherein μ and σ denote, for each of the two regions, the mean and the standard deviation of the first chrominance components of all its pixel points;
①_3e2) Compute, in the same way, the first region contrast of the first chrominance components between another pair of the three regions of I_k, recorded as C_a3, and the corresponding second region contrast, recorded as C_a4; wherein μ and σ denote the mean and the standard deviation of the first chrominance components of all the pixel points of the additionally involved region;
①_3f2) Arrange the eight region contrasts in order into a vector, F_k^rc = [C_L1, C_L2, C_L3, C_L4, C_a1, C_a2, C_a3, C_a4]; wherein the symbol "[ ]" is a vector-representing symbol, and the notation indicates that the eight region contrasts are connected to form a vector;
①_4) Form the global feature vector of each tone mapping image in the training image set from its bright-dark region feature vector and its region contrast feature vector, and record the global feature vector of I_k as F_k, F_k = [F_k^bd, F_k^rc]; wherein the dimension of F_k is 11×1, the symbol "[ ]" is a vector-representing symbol, and [F_k^bd, F_k^rc] indicates that F_k^bd and F_k^rc are connected to form a vector;
①_5) Form a training sample data set from the global feature vectors and the average subjective score differences of all the tone mapping images in the training image set; the training sample data set contains N global feature vectors and N average subjective score differences; then train on all the global feature vectors in the training sample data set with support vector regression as the machine-learning method, such that the error between the regression function value obtained by training and the average subjective score difference is minimal, and obtain by fitting the optimal weight vector w* and the optimal bias term b*; then use the optimal weight vector w* and the optimal bias term b* to construct the quality prediction model, recorded as q(F) = (w*)^T F + b*; wherein q() is the functional representation, F denotes the global feature vector of a tone mapping image and serves as the input vector of the quality prediction model, (w*)^T is the transpose of w*, and (w*)^T F + b* is a linear function of F;
the test stage process comprises the following specific steps:
② For any tone mapping image I_test used for testing, obtain the global feature vector of I_test by the same operations as steps ①_2 to ①_4, recorded as F_test; then test and predict F_test according to the quality prediction model constructed in the training stage, and take the predicted value corresponding to F_test as the objective quality prediction value of I_test, recorded as Q_test, Q_test = q(F_test) = (w*)^T F_test + b*; wherein I_test has width W′ and height H′, F_test has dimension 11×1, and (w*)^T F_test + b* is a linear function of F_test.
2. The tone mapping image quality evaluation method according to claim 1, wherein in said step ①_2, R_k^bri, R_k^dar and R_k^nor are obtained as follows:
①_2a) Record the R, G and B components of I_k in the RGB color space correspondingly as R_k, G_k and B_k; then compute the dark channel image of I_k, recorded as I_k^dark, and record the pixel value of the pixel point with coordinate position (x, y) in I_k^dark as I_k^dark(x, y), I_k^dark(x, y) = min over (x1, y1) ∈ C_{x,y} of min(R_k(x1, y1), G_k(x1, y1), B_k(x1, y1)); wherein 1 ≤ x ≤ W, 1 ≤ y ≤ H, min() is the minimum-value function, C_{x,y} represents the set of coordinate positions of all pixel points in the 3×3 neighborhood centered on the pixel point with coordinate position (x, y), (x1, y1) is any coordinate position in C_{x,y}, R_k(x1, y1) represents the pixel value of the pixel point with coordinate position (x1, y1) in R_k, G_k(x1, y1) represents the pixel value of the pixel point with coordinate position (x1, y1) in G_k, and B_k(x1, y1) represents the pixel value of the pixel point with coordinate position (x1, y1) in B_k;
①_2b) Compute the gray histogram distribution of I_k^dark, recorded as {h_k(j) | 1 ≤ j ≤ 256}; then record as X_min the coordinate of the node with the smallest coordinate among all nodes of {h_k(j) | 1 ≤ j ≤ 256} whose histogram value is non-zero, and record as X_max the coordinate of the node with the largest coordinate among all nodes whose histogram value is non-zero; record as Ω1 the set of the pixel values of all pixel points of I_k^dark whose pixel values lie in [X_min, X_mid], and record as Ω2 the set of the pixel values of all pixel points of I_k^dark whose pixel values lie in (X_mid, X_max]; wherein j is a positive integer, 1 ≤ j ≤ 256, h_k(j) represents the histogram value of the node with coordinate j in {h_k(j) | 1 ≤ j ≤ 256}, X_mid = ⌊(X_min + X_max)/2⌋, and the symbol ⌊ ⌋ is the rounding-down operation symbol;
①_2c) Obtain a first threshold by maximization over Ω1, recorded as X1*, and obtain a second threshold by maximization over Ω2, recorded as X2*; wherein X1* denotes the value of X1 that maximizes the separability criterion over Ω1, X1 is any pixel value in Ω1, P_f(X1) denotes the probability density function of all pixel values of Ω1 in the range [X_min, X1), μ_f(X1) denotes the mean of all pixel values of Ω1 in [X_min, X1), σ_f(X1) denotes the standard deviation of all pixel values of Ω1 in [X_min, X1), μ_b(X1) denotes the mean of all pixel values of Ω1 in [X1, X_mid], and σ_b(X1) denotes the standard deviation of all pixel values of Ω1 in [X1, X_mid]; X2* denotes the value of X2 that maximizes the criterion over Ω2, X2 is any pixel value in Ω2, P_f(X2) denotes the probability density function of all pixel values of Ω2 in the range [X_mid, X2), μ_f(X2) and σ_f(X2) denote the mean and the standard deviation of all pixel values of Ω2 in [X_mid, X2), and μ_b(X2) and σ_b(X2) denote the mean and the standard deviation of all pixel values of Ω2 in [X2, X_max];
①_2d) Determine the region formed by all the pixel points of I_k^dark whose pixel values lie in (X2*, X_max] as the bright region R_k^bri; determine the region formed by all the pixel points whose pixel values lie in [X_min, X1*) as the dark region R_k^dar; and determine the region formed by all the pixel points whose pixel values lie in [X1*, X2*] as the normal region R_k^nor.
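The dark-channel construction of step ①_2a can be sketched as follows, assuming an 8-bit RGB input given as an H×W×3 array; edge pixels use the part of the 3×3 window that lies inside the image.

```python
import numpy as np

def dark_channel(rgb, radius=1):
    """Dark-channel image: at each pixel, the minimum of the R, G, B
    components over a 3 x 3 neighborhood (radius 1), as in step (1)_2a."""
    min_rgb = rgb.min(axis=2).astype(float)      # per-pixel min over R, G, B
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            # clamp the window to the image borders
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = min_rgb[y0:y1, x0:x1].min()  # min over the window
    return out
```

The gray histogram, the Ω1/Ω2 split at X_mid, and the thresholding of steps ①_2b to ①_2d then operate on this dark-channel image.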
CN201910881340.2A 2019-09-18 2019-09-18 Tone mapping image quality evaluation method Active CN110717892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910881340.2A CN110717892B (en) 2019-09-18 2019-09-18 Tone mapping image quality evaluation method

Publications (2)

Publication Number Publication Date
CN110717892A CN110717892A (en) 2020-01-21
CN110717892B true CN110717892B (en) 2022-06-28

Family

ID=69209939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910881340.2A Active CN110717892B (en) 2019-09-18 2019-09-18 Tone mapping image quality evaluation method

Country Status (1)

Country Link
CN (1) CN110717892B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950596B (en) * 2021-03-09 2023-06-02 Ningbo University Tone mapping omnidirectional image quality evaluation method based on multiple areas and multiple levels
CN116630447B (en) * 2023-07-24 2023-10-20 Chengdu Haifeng Ruizhi Technology Co., Ltd. Weather prediction method based on image processing

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540048A (en) * 2009-04-21 2009-09-23 Beihang University Image quality evaluation method based on support vector machine
CN105741328A (en) * 2016-01-22 2016-07-06 Xidian University Shot image quality evaluation method based on visual perception
CN105761227A (en) * 2016-03-04 2016-07-13 Tianjin University Underwater image enhancement method based on dark channel prior algorithm and white balance
CN107105223A (en) * 2017-03-20 2017-08-29 Ningbo University Tone mapping image objective quality evaluation method based on global features
CN107172418A (en) * 2017-06-08 2017-09-15 Ningbo University Tone-mapped image quality evaluation method based on exposure state analysis
KR101846743B1 (en) * 2016-11-28 2018-04-09 Industry-Academic Cooperation Foundation, Yonsei University Objective quality assessment method and apparatus for tone mapped images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10467735B2 (en) * 2015-08-25 2019-11-05 Interdigital Vc Holdings, Inc. Inverse tone mapping based on luminance zones
EP3319013A1 (en) * 2016-11-03 2018-05-09 Thomson Licensing Method and device for estimating cast shadow regions and/or highlight regions in images

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540048A (en) * 2009-04-21 2009-09-23 Beihang University Image quality evaluation method based on support vector machine
CN105741328A (en) * 2016-01-22 2016-07-06 Xidian University Shot image quality evaluation method based on visual perception
CN105761227A (en) * 2016-03-04 2016-07-13 Tianjin University Underwater image enhancement method based on dark channel prior algorithm and white balance
KR101846743B1 (en) * 2016-11-28 2018-04-09 Industry-Academic Cooperation Foundation, Yonsei University Objective quality assessment method and apparatus for tone mapped images
CN107105223A (en) * 2017-03-20 2017-08-29 Ningbo University Tone mapping image objective quality evaluation method based on global features
CN107172418A (en) * 2017-06-08 2017-09-15 Ningbo University Tone-mapped image quality evaluation method based on exposure state analysis

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An Entropy-based Inverse Tone Mapping Operator for High Dynamic Range Applications; P. Mohammadi, M. T. Pourazad and P. Nasiopoulos; 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS); 2018-04-02; full text *
Blind quality index for tone-mapped images based on luminance partition; Chen P, Li L, Zhang X, et al.; Pattern Recognition; 2019-01-08; pp. 110-113 *
Transform domain measure of enhancement (TDME) for security imaging applications; Samani, A., K. Panetta, and S. Agaian; IEEE International Conference on Technologies for Homeland Security, IEEE, 2014; 2014-01-06; full text *
Research on fruit image recognition based on an improved maximum between-class variance method; Chen Xuexin, Bu Qingkai; Journal of Qinghai University (Engineering Technology Edition); 2019-05-31; pp. 33-35 *

Also Published As

Publication number Publication date
CN110717892A (en) 2020-01-21

Similar Documents

Publication Publication Date Title
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN104902267B (en) No-reference image quality evaluation method based on gradient information
CN108074239B (en) No-reference image quality objective evaluation method based on prior perception quality characteristic diagram
CN107105223B (en) A kind of tone mapping method for objectively evaluating image quality based on global characteristics
CN106600597B (en) It is a kind of based on local binary patterns without reference color image quality evaluation method
CN105574901B (en) A kind of general non-reference picture quality appraisement method based on local contrast pattern
CN109978854B (en) Screen content image quality evaluation method based on edge and structural features
CN102547368B (en) Objective evaluation method for quality of stereo images
CN109218716B (en) No-reference tone mapping image quality evaluation method based on color statistics and information entropy
CN107146220B (en) A kind of universal non-reference picture quality appraisement method
CN106651829B (en) A kind of non-reference picture method for evaluating objective quality based on energy and texture analysis
CN110717892B (en) Tone mapping image quality evaluation method
CN112950596B (en) Tone mapping omnidirectional image quality evaluation method based on multiple areas and multiple levels
CN110120034B (en) Image quality evaluation method related to visual perception
CN110443800A (en) The evaluation method of video image quality
CN114598864A (en) Full-reference ultrahigh-definition video quality objective evaluation method based on deep learning
CN106023152B (en) It is a kind of without with reference to objective evaluation method for quality of stereo images
CN110910347A (en) Image segmentation-based tone mapping image no-reference quality evaluation method
CN106683079A (en) No-reference image objective quality evaluation method based on structural distortion
CN107292331B (en) Based on unsupervised feature learning without reference screen image quality evaluating method
CN114067006B (en) Screen content image quality evaluation method based on discrete cosine transform
Fu et al. Image quality assessment using edge and contrast similarity
Yan et al. Blind image quality assessment based on natural redundancy statistics
CN112950479B (en) Image gray level region stretching algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231226

Address after: 315000 North of First Floor, Fifth Avenue, No. 719 Zhongxing Road, Yinzhou District, Ningbo City, Zhejiang Province

Patentee after: Ningbo Frontier Digital Technology Co.,Ltd.

Address before: 200120 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee before: Shanghai Ruishenglian Information Technology Co., Ltd.

Effective date of registration: 20231226

Address after: 200120 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: Shanghai Ruishenglian Information Technology Co., Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Patentee before: Ningbo University