CN108010024B - Blind reference tone mapping image quality evaluation method - Google Patents

Blind reference tone mapping image quality evaluation method

Info

Publication number
CN108010024B
CN108010024B (application CN201711303968.1A)
Authority
CN
China
Prior art keywords
image
tone mapping
brightness
quality evaluation
naturalness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711303968.1A
Other languages
Chinese (zh)
Other versions
CN108010024A (en)
Inventor
蒋刚毅
宋昊
郁梅
彭宗举
陈芬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201711303968.1A priority Critical patent/CN108010024B/en
Publication of CN108010024A publication Critical patent/CN108010024A/en
Application granted granted Critical
Publication of CN108010024B publication Critical patent/CN108010024B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a blind-reference objective quality evaluation method for tone-mapped images. First, the high-brightness and low-brightness regions of the tone-mapped image are extracted, and the detail-information distortion caused by tone mapping is measured by combining the amount of detail information in these local regions with that of the global image. Next, the naturalness distortion of brightness and color is measured using natural-scene statistics of the luminance channel and the yellow channel of the tone-mapped image. Then, brightness and color features are extracted from the tone-mapped image from an aesthetic point of view. Finally, a random forest regresses all the features and predicts the image quality, realizing blind-reference objective quality evaluation of tone-mapped images with a clearly improved evaluation effect and better consistency with human visual perception.

Description

Blind reference tone mapping image quality evaluation method
Technical Field
The invention relates to the technical field of image quality evaluation, in particular to a blind reference tone mapping image quality evaluation method.
Background
Generally speaking, the luminance dynamic range that the human visual system can perceive within a single scene reaches about 6 orders of magnitude, whereas a traditional low dynamic range image can express no more than 3 orders of magnitude and therefore cannot present the true appearance of a natural scene to the human eye; this has driven the development of high dynamic range images. A high dynamic range image has a wider dynamic range and richer tonal levels than a low dynamic range image and can well preserve the detail information of the high-brightness and low-brightness regions of a scene. However, dedicated high dynamic range display devices are complex to develop, expensive, and difficult to popularize, while conventional low dynamic range displays are far more widely used. To display a high dynamic range image on a conventional display, a tone mapping technique is required to map the image into the display range of a low dynamic range display while preserving as much information of the original high dynamic range image as possible; otherwise, the originally rich brightness information is lost, the visual effect becomes poor, details become inconspicuous, and the image information cannot be acquired accurately. Tone mapping image quality evaluation is therefore of important research significance for improving tone mapping algorithms and high dynamic range imaging technology.
Early tone mapping image evaluation relied mostly on subjective human visual assessment. Although subjective tests are closer to human perception and can provide a more accurate reference, they are time-consuming and labor-intensive, require expensive high dynamic range displays and professional testers for repeated experiments, and are prone to evaluation errors caused by human factors. Objective tone mapping image quality evaluation methods are therefore more practical.
Traditional objective quality evaluation methods for low dynamic range images are mature, but they are not suitable for tone-mapped images. In a conventional full-reference method the reference image and the distorted image share the same dynamic range, whereas the dynamic range difference between a high dynamic range image and its tone-mapped version is too large, so directly applying a full-reference method designed for low dynamic range images performs poorly; a dedicated full-reference quality evaluation method must be designed for tone-mapped images. Wang et al. combined an improved MS-SSIM algorithm with natural-scene statistics to propose the TMQI algorithm, which extracts features insensitive to the dynamic range difference and considers the naturalness of the tone-mapped image, improving on conventional methods, but it does not consider color distortion. Hossein et al. compressed the dynamic ranges of the R, G and B channels of the high dynamic range image separately and evaluated each channel with an improved FSIM model, yielding the FSITM algorithm.
Full-reference evaluation methods are limited because the dynamic ranges of the reference and distorted images differ too much, making it difficult to extract features in the same dynamic range for similarity measurement, and the original high dynamic range image is often unavailable. Blind-reference quality evaluation methods have fewer restrictions and higher practicality. Unlike traditional low dynamic range image distortions, tone-mapped images usually do not exhibit distortion types such as blurring or blocking artifacts; their distortion mainly appears as loss of detail information and an unnatural appearance, so directly applying a conventional blind-reference method cannot measure the degree of distortion accurately. Based on this, Gu et al. proposed the BTMQI algorithm, which considers the information amount, naturalness and structure of the tone-mapped image and defines, from the characteristics of high dynamic range images, the properties a tone-mapped image should have. It achieves good results but ignores color distortion.
Based on the above analysis, blind-reference tone mapping image quality evaluation has considerable research value: existing methods are too few and not comprehensive, and their consistency with subjective perception still needs improvement. There is therefore a need for a method that, without the high dynamic range image, extracts features targeted at the distortions specific to tone-mapped images and evaluates their quality more accurately.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a blind-reference tone mapping image quality evaluation method that effectively improves the correlation between objective evaluation results and subjective perception and evaluates the quality of a tone-mapped image more accurately without a reference image.
The technical scheme adopted by the invention is a blind-reference tone mapping image quality evaluation method comprising the following steps:
(1) Let I_TM denote the tone-mapped image to be evaluated, with width W and height H; let L_TM denote the luminance component map of I_TM, whose total number of pixels is N = W × H.
(2) Arrange the pixel values of L_TM in descending order to obtain an N × 1 column vector P. Take the M_B-th value of P as the threshold of the high-brightness region, denoted T_B, and the M_D-th value of P as the threshold of the low-brightness region, denoted T_D. M_B and M_D each take n values, M_B = [M_B1, M_B2, ..., M_Bn], M_D = [M_D1, M_D2, ..., M_Dn].
(3) Set the pixel values of the regions of L_TM greater than T_B to 1 and all other pixel values to 0 to obtain the high-brightness binary maps of the image, denoted R_B, R_B = [R_B1, R_B2, ..., R_Bn]; set the pixel values of the regions of L_TM smaller than T_D to 1 and all other pixel values to 0 to obtain the low-brightness binary maps, denoted R_D, R_D = [R_D1, R_D2, ..., R_Dn].
(4) Using a structuring element A, apply an opening operation followed by a closing operation to each R_B and R_D to obtain the connected high-brightness and low-brightness binary maps, denoted R'_B and R'_D respectively, R'_B = [R'_B1, R'_B2, ..., R'_Bn], R'_D = [R'_D1, R'_D2, ..., R'_Dn].
(5) Compute the detail information of L_TM as its global detail-information feature, denoted E_G; compute the detail information of the regions marked by R'_B and R'_D as the local detail-information features of L_TM, denoted E_LB and E_LD respectively, E_LB = [E_LB1, E_LB2, ..., E_LBn], E_LD = [E_LD1, E_LD2, ..., E_LDn]. All detail-information features form the detail-information feature vector of L_TM, denoted f_details, f_details = [E_G, E_LB, E_LD], whose dimension is 2n + 1.
(6) Remove the local mean from L_TM and normalize the contrast to obtain the MSCN coefficients, denoted D_L. Fit the histogram of D_L with a generalized Gaussian distribution to obtain the fitting parameters: mean μ_L, standard deviation σ_L, kurtosis k_L and skewness s_L, and take μ_L, σ_L, k_L and s_L as the naturalness features of the luminance channel.
(7) Extract the yellow channel image of I_TM, denoted Y_TM, and process Y_TM to obtain the naturalness feature of the yellow channel.
(8) The naturalness feature of the luminance channel obtained in step (6) and the naturalness feature of the yellow channel obtained in step (7) together form, in order, the naturalness feature vector of I_TM, denoted f_naturalness.
(9) Convert I_TM from the RGB space to the HSV color space and extract its saturation component image and brightness (value) component image, denoted S_TM and V_TM respectively; process S_TM and V_TM to obtain the aesthetic feature of I_TM, denoted f_aesthetic.
(10) All the features obtained in the above steps form, in order, the feature vector of I_TM, denoted F, F = [f_details, f_naturalness, f_aesthetic].
(11) Taking F as input, compute the objective quality evaluation value of I_TM with the random forest technique.
The invention has the following beneficial effects. First, the method recognizes that the detail-information distortion of tone mapping differs from that of traditional low dynamic range images: a high-quality tone-mapped image preserves more detail information in its high-brightness and low-brightness regions, and the brightest and darkest m% regions (m = 10, 20, 30) of the tone-mapped image are consistent with those of the original high dynamic range image. The brightest m% regions of the tone-mapped image are therefore extracted as the high-brightness regions and the darkest m% regions as the low-brightness regions, and the information amount of these regions is taken as a feature, which reflects the detail-information distortion specific to tone-mapped images more accurately. Second, considering that color distortion in the tone mapping process also affects image quality, the method combines the brightness and chrominance of the tone-mapped image, extracts image features from the two aspects of naturalness and aesthetics, finally fuses all the features, and computes the final objective quality evaluation value with the random forest method. Third, the method starts from the distortion characteristics of tone mapping, and the extracted features achieve a good evaluation effect with a relatively simple aggregation strategy, giving high effectiveness and low complexity.
Preferably, in step (7), the specific method for processing Y_TM is as follows: extract the matrix of local standard deviations of Y_TM, denoted σ_Y; extract the MSCN coefficients of σ_Y, denoted D_Y; fit the histogram of D_Y with a generalized Gaussian distribution to obtain the fitting parameter goodness g_Y, and take g_Y as the naturalness feature of the yellow channel. The yellow channel is a color channel that directly provides yellow light-source information, and features extracted from this channel can effectively capture color distortion that cannot be captured from the luminance channel.
Preferably, in step (9), the specific method for processing S_TM and V_TM is as follows: using the rule of thirds, divide S_TM and V_TM each into 3 × 3 non-overlapping image blocks of size ⌊W/3⌋ × ⌊H/3⌋; then compute the mean of each image block of S_TM and V_TM to obtain a saturation vector and a brightness vector, denoted S and V respectively, S = [s_1, s_2, ..., s_i, ..., s_9], V = [v_1, v_2, ..., v_i, ..., v_9], where s_i denotes the mean of the i-th image block of the saturation component and v_i the mean of the i-th image block of the brightness component, and take S and V as the aesthetic features. The tone mapping process causes a certain distortion of the brightness and chrominance of the image; a reasonable distribution of brightness and chrominance increases the aesthetic appeal of the image and produces better visual perception for the human eye. The human visual system is sensitive to brightness and saturation, which reflect the chrominance information of an image, so they can serve as important features for the aesthetic evaluation of images.
Drawings
FIG. 1 is a block diagram of the method described in Example 1.
Detailed Description
The invention is further described below with reference to the accompanying drawings and specific embodiments, so that those skilled in the art can practice it; the scope of the invention is not limited to the specific embodiments.
The invention relates to a blind reference tone mapping image quality evaluation method, which comprises the following steps:
(1) Let I_TM denote the tone-mapped image to be evaluated, with width W and height H; let L_TM denote the luminance component map of I_TM, whose total number of pixels is N = W × H.
(2) Arrange the pixel values of L_TM in descending order to obtain an N × 1 column vector P. Take the M_B-th value of P as the threshold of the high-brightness region, denoted T_B, and the M_D-th value of P as the threshold of the low-brightness region, denoted T_D. M_B and M_D each take n values, M_B = [M_B1, M_B2, ..., M_Bn], M_D = [M_D1, M_D2, ..., M_Dn]; accordingly T_B and T_D also each have n values, T_B = [T_B1, T_B2, ..., T_Bn], T_D = [T_D1, T_D2, ..., T_Dn].
(3) Set the pixel values of the regions of L_TM greater than T_B to 1 and all other pixel values to 0 to obtain the high-brightness binary maps of the image, denoted R_B, R_B = [R_B1, R_B2, ..., R_Bn]; set the pixel values of the regions of L_TM smaller than T_D to 1 and all other pixel values to 0 to obtain the low-brightness binary maps, denoted R_D, R_D = [R_D1, R_D2, ..., R_Dn].
(4) Using a structuring element A, apply an opening operation followed by a closing operation to each R_B and R_D to obtain the connected high-brightness and low-brightness binary maps, denoted R'_B and R'_D respectively, R'_B = [R'_B1, R'_B2, ..., R'_Bn], R'_D = [R'_D1, R'_D2, ..., R'_Dn].
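For illustration only (not part of the claimed method), a minimal Python sketch of steps (2) to (4) is given below, assuming NumPy and scikit-image. The brightest/darkest fractions (10%, 20%, 30%) and the disk radius of 3 pixels are the values used in Example 1, and reading the low-brightness threshold as the M_D-th smallest pixel value is an assumption.

```python
import numpy as np
from skimage.morphology import disk, opening, closing

def bright_dark_masks(L, fractions=(0.10, 0.20, 0.30), radius=3):
    """Steps (2)-(4): connected high- and low-brightness binary maps of L_TM.

    L         : 2-D luminance component map.
    fractions : assumed brightest/darkest percentages (Example 1 uses 10%, 20%, 30%).
    radius    : radius of the disk structuring element A (3 pixels in Example 1).
    """
    p_desc = np.sort(L.ravel())[::-1]          # pixel values in descending order (vector P)
    p_asc = p_desc[::-1]
    N = p_desc.size
    A = disk(radius)                           # structuring element A
    masks_B, masks_D = [], []
    for frac in fractions:
        M = max(int(np.floor(N * frac)), 1)
        T_B = p_desc[M - 1]                    # high-brightness threshold T_B
        T_D = p_asc[M - 1]                     # low-brightness threshold T_D (assumed M-th smallest value)
        R_B = (L > T_B).astype(np.uint8)       # step (3): binary maps
        R_D = (L < T_D).astype(np.uint8)
        masks_B.append(closing(opening(R_B, A), A))   # step (4): opening followed by closing
        masks_D.append(closing(opening(R_D, A), A))
    return masks_B, masks_D
```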
(5) Compute the detail information of L_TM as its global detail-information feature, denoted E_G; compute the detail information of the regions marked by R'_B and R'_D as the local detail-information features of L_TM, denoted E_LB and E_LD respectively, E_LB = [E_LB1, E_LB2, ..., E_LBn], E_LD = [E_LD1, E_LD2, ..., E_LDn]. All detail-information features form the detail-information feature vector of L_TM, denoted f_details, f_details = [E_G, E_LB, E_LD], whose dimension is 2n + 1.
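The sketch below, given for illustration only, assumes that the "detail information" of step (5) is the Shannon entropy of the luminance histogram computed globally and inside each connected region; this reading of "information amount" is an assumption rather than the claimed definition.

```python
import numpy as np

def shannon_entropy(values, bins=256, value_range=(0, 255)):
    """Shannon entropy of a set of pixel values (assumed detail-information measure)."""
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def detail_features(L, masks_B, masks_D):
    """Step (5): f_details = [E_G, E_LB, E_LD], dimension 2n + 1."""
    E_G = shannon_entropy(L.ravel())                               # global feature E_G
    E_LB = [shannon_entropy(L[m.astype(bool)]) for m in masks_B]   # high-brightness regions
    E_LD = [shannon_entropy(L[m.astype(bool)]) for m in masks_D]   # low-brightness regions
    return np.array([E_G, *E_LB, *E_LD])
```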
(6) Remove the local mean from L_TM and normalize the contrast to obtain the MSCN coefficients, denoted D_L. Fit the histogram of D_L with a generalized Gaussian distribution to obtain the fitting parameters: mean μ_L, standard deviation σ_L, kurtosis k_L and skewness s_L, and take μ_L, σ_L, k_L and s_L as the naturalness features of the luminance channel.
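A minimal sketch of the luminance-naturalness features of step (6) follows, for illustration only. The Gaussian weighting window (sigma = 7/6) is a common BRISQUE-style choice not specified by the patent, and computing the four statistics directly from the MSCN coefficient map D_L is a simplification of the generalized-Gaussian histogram fit described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import kurtosis, skew

def mscn(X, sigma=7.0 / 6.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a 2-D map."""
    X = X.astype(np.float64)
    mu = gaussian_filter(X, sigma)                       # local mean
    var = gaussian_filter(X * X, sigma) - mu * mu        # local variance
    return (X - mu) / (np.sqrt(np.maximum(var, 0.0)) + 1.0)

def luminance_naturalness(L):
    """Step (6): naturalness features [mu_L, sigma_L, k_L, s_L] of the luminance channel."""
    D_L = mscn(L).ravel()
    return np.array([D_L.mean(), D_L.std(), kurtosis(D_L), skew(D_L)])
```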
(7) Extract the yellow channel image of I_TM, denoted Y_TM, and process Y_TM as follows: extract the matrix of local standard deviations of Y_TM, denoted σ_Y; extract the MSCN coefficients of σ_Y, denoted D_Y; fit the histogram of D_Y with a generalized Gaussian distribution to obtain the fitting parameter goodness g_Y, and take g_Y as the naturalness feature of the yellow channel.
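The yellow-channel formula appears only as an image in the original document, so the sketch below uses the opponent-color approximation Y = (R + G) / 2 purely as a placeholder assumption; expressing the "goodness" g_Y of the generalized-Gaussian fit through a Kolmogorov-Smirnov statistic is likewise an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import gennorm, kstest

def local_std(X, sigma=7.0 / 6.0):
    """Local standard-deviation matrix of a 2-D map (window width is an assumption)."""
    X = X.astype(np.float64)
    mu = gaussian_filter(X, sigma)
    return np.sqrt(np.maximum(gaussian_filter(X * X, sigma) - mu * mu, 0.0))

def yellow_naturalness(img_rgb):
    """Step (7): goodness-of-fit feature g_Y of the yellow channel (sketch)."""
    R = img_rgb[..., 0].astype(np.float64)
    G = img_rgb[..., 1].astype(np.float64)
    Y_TM = (R + G) / 2.0                                  # assumed yellow-channel definition
    sigma_Y = local_std(Y_TM)                             # standard-deviation matrix sigma_Y
    mu = gaussian_filter(sigma_Y, 7.0 / 6.0)
    D_Y = ((sigma_Y - mu) / (local_std(sigma_Y) + 1.0)).ravel()   # MSCN coefficients of sigma_Y
    beta, loc, scale = gennorm.fit(D_Y)                   # generalized Gaussian fit
    g_Y = 1.0 - kstest(D_Y, 'gennorm', args=(beta, loc, scale)).statistic
    return g_Y
```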
(8) The naturalness feature of the luminance channel obtained in step (6) and the naturalness feature of the yellow channel obtained in step (7) together form, in order, the naturalness feature vector of I_TM, denoted f_naturalness, f_naturalness = [μ_L, σ_L, k_L, s_L, g_Y], whose dimension is 5.
(9) Convert I_TM from the RGB space to the HSV color space and extract its saturation component image and brightness (value) component image, denoted S_TM and V_TM respectively. Process S_TM and V_TM as follows: using the rule of thirds, divide S_TM and V_TM each into 3 × 3 non-overlapping image blocks of size ⌊W/3⌋ × ⌊H/3⌋; then compute the mean of each image block of S_TM and V_TM to obtain a saturation vector and a brightness vector, denoted S and V respectively, S = [s_1, s_2, ..., s_i, ..., s_9], V = [v_1, v_2, ..., v_i, ..., v_9], where s_i denotes the mean of the i-th image block of the saturation component and v_i the mean of the i-th image block of the brightness component. Taking S and V as the aesthetic features gives the aesthetic feature vector of I_TM, denoted f_aesthetic, f_aesthetic = [S, V], whose dimension is 18.
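A minimal sketch of the aesthetic features of step (9), for illustration only, assuming an 8-bit RGB input and using scikit-image for the HSV conversion:

```python
import numpy as np
from skimage.color import rgb2hsv

def aesthetic_features(img_rgb):
    """Step (9): rule-of-thirds block means of the saturation and value components (18 values)."""
    hsv = rgb2hsv(img_rgb.astype(np.float64) / 255.0)    # assumes 8-bit RGB input
    S_TM, V_TM = hsv[..., 1], hsv[..., 2]
    H, W = S_TM.shape
    bh, bw = H // 3, W // 3                              # block size floor(H/3) x floor(W/3)
    feats = []
    for comp in (S_TM, V_TM):                            # saturation vector S, then brightness vector V
        for i in range(3):
            for j in range(3):
                block = comp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                feats.append(float(block.mean()))        # mean of the i-th image block
    return np.array(feats)                               # f_aesthetic = [S, V], dimension 18
```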
(10) All the features obtained in the above steps form, in order, the feature vector of I_TM, denoted F, F = [f_details, f_naturalness, f_aesthetic].
(11) Taking F as input, compute the objective quality evaluation value of I_TM with the random forest technique.
In the above steps, the symbol ⌊·⌋ denotes the floor (round-down) operation, and the symbol "[ ]" denotes a vector.
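For illustration only, a hedged sketch of the random-forest regression of step (11), assuming scikit-learn; the training set, its mean subjective scores, and the forest hyper-parameters are placeholders not specified by the patent:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_quality_model(F_train, mos_train, n_estimators=100, random_state=0):
    """Fit a random forest that maps feature vectors F to subjective quality scores."""
    model = RandomForestRegressor(n_estimators=n_estimators, random_state=random_state)
    model.fit(np.asarray(F_train), np.asarray(mos_train))
    return model

def predict_quality(model, F):
    """Step (11): objective quality evaluation value of one tone-mapped image from its feature vector F."""
    return float(model.predict(np.asarray(F).reshape(1, -1))[0])
```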
Example 1:
a blind reference tone mapping image quality evaluation method comprises the following steps:
(1) Let I_TM denote the tone-mapped image to be evaluated, with width W and height H; let L_TM denote the luminance component map of I_TM, whose total number of pixels is N = W × H.
(2) Arrange the pixel values of L_TM in descending order to obtain an N × 1 column vector P. Take the M_B-th value of P as the threshold of the high-brightness region, denoted T_B, and the M_D-th value of P as the threshold of the low-brightness region, denoted T_D. In this embodiment M_B and M_D each take 3 values, corresponding to the brightest and darkest 10%, 20% and 30% of the pixels, i.e. M_B = M_D = [⌊N×10%⌋, ⌊N×20%⌋, ⌊N×30%⌋]; accordingly T_B and T_D also each have 3 values, T_B = [T_B1, T_B2, T_B3], T_D = [T_D1, T_D2, T_D3].
(3) Set the pixel values of the regions of L_TM greater than T_B to 1 and all other pixel values to 0 to obtain the high-brightness binary maps of the image, denoted R_B, R_B = [R_B1, R_B2, R_B3]; set the pixel values of the regions of L_TM smaller than T_D to 1 and all other pixel values to 0 to obtain the low-brightness binary maps, denoted R_D, R_D = [R_D1, R_D2, R_D3].
(4) Using a disk-shaped structuring element A with a radius of 3 pixels, apply an opening operation followed by a closing operation to each R_B and R_D to obtain the connected high-brightness and low-brightness binary maps, denoted R'_B and R'_D respectively, R'_B = [R'_B1, R'_B2, R'_B3], R'_D = [R'_D1, R'_D2, R'_D3].
(5) Compute the detail information of L_TM as its global detail-information feature, denoted E_G; compute the detail information of the regions marked by R'_B and R'_D as the local detail-information features of L_TM, denoted E_LB and E_LD respectively, E_LB = [E_LB1, E_LB2, E_LB3], E_LD = [E_LD1, E_LD2, E_LD3]. All detail-information features form the detail-information feature vector of L_TM, recorded as f_1, f_2, f_3, f_4, f_5, f_6, f_7.
(6) Remove the local mean from L_TM and normalize the contrast to obtain the MSCN coefficients, denoted D_L. Fit the histogram of D_L with a generalized Gaussian distribution to obtain the fitting parameters: mean μ_L, standard deviation σ_L, kurtosis k_L and skewness s_L. These naturalness features of the luminance channel are recorded as f_8, f_9, f_10, f_11.
(7) Extract the yellow channel image of I_TM, denoted Y_TM, and process Y_TM: extract the matrix of local standard deviations of Y_TM, denoted σ_Y; extract the MSCN coefficients of σ_Y, denoted D_Y; fit the histogram of D_Y with a generalized Gaussian distribution to obtain the fitting parameter goodness g_Y. This naturalness feature of the yellow channel is recorded as f_12.
(8) Convert I_TM from the RGB space to the HSV color space and extract its saturation component image and brightness (value) component image, denoted S_TM and V_TM respectively. Process S_TM and V_TM: using the rule of thirds, divide S_TM and V_TM each into 3 × 3 non-overlapping image blocks of size ⌊W/3⌋ × ⌊H/3⌋; then compute the mean of each image block of S_TM and V_TM to obtain a saturation vector and a brightness vector, denoted S and V respectively, S = [s_1, s_2, ..., s_i, ..., s_9], V = [v_1, v_2, ..., v_i, ..., v_9], where s_i denotes the mean of the i-th image block of the saturation component and v_i the mean of the i-th image block of the brightness component. Taking S and V as the aesthetic features gives the aesthetic features of I_TM, recorded as f_13, f_14, ..., f_30.
(9) All the features obtained in the above steps form, in order, the feature vector of I_TM, denoted F, F = [f_1, f_2, ..., f_30], whose dimension is 30.
(10) Taking F as input, compute the objective quality evaluation value of I_TM with the random forest technique.
To further illustrate the feasibility and effectiveness of the method of the present invention, the following experiments were conducted.
In this embodiment, two public authoritative tone-mapped image databases, TMID and ESPL-LIVE HDR, are selected for the experiments. The indicators of each database are detailed in Table 1, including the number of tone mapping operators, the number of tone-mapped images, and the number of subjective testers. Each database provides an average subjective score for every tone-mapped image.
TABLE 1 Indices of the authoritative tone-mapped image databases
(The table data are provided as an image in the original document and are not reproduced here.)
Next, the correlation between the objective quality evaluation value obtained by the method of the invention and the average subjective score of each tone-mapped image is analyzed. Following the image quality evaluation model reference standard given by the VQEG, three indices are selected to assess the performance of the evaluation method: the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC), and the Kendall rank-order correlation coefficient (KROCC). The value ranges of PLCC, SROCC and KROCC are all [0, 1]; the closer a value is to 1, the better the objective evaluation method, and vice versa.
All tone-mapped images in the TMID and ESPL-LIVE HDR databases are processed in the same way according to steps (1) to (10) of the method to obtain an objective quality evaluation value for each tone-mapped image; a four-parameter logistic nonlinear fit is then performed between the objective quality predictions and the corresponding average subjective scores, and the performance index values between the objective evaluation results and subjective perception are finally obtained. To verify its effectiveness, the method is compared on the TMID and ESPL-LIVE HDR databases with existing objective image quality evaluation methods of advanced performance. The performance indices on the TMID database are shown in Table 2 and those on the ESPL-LIVE HDR database in Table 3, where TMQI, FSITM and NITI are full-reference methods for tone-mapped images, BLIINDS-II and BRISQUE are no-reference methods for conventional images, and BTMQI and HIGRADE are no-reference methods for tone-mapped images. As seen from the data in the two tables, the method performs best on both databases and shows strong consistency with subjective human perception.
TABLE 2 Performance comparison on the TMID database between the method of the invention and existing objective image quality evaluation methods
(The table data are provided as an image in the original document and are not reproduced here.)
TABLE 3 Performance comparison on the ESPL-LIVE HDR database between the method of the invention and existing objective image quality evaluation methods
(The table data are provided as an image in the original document and are not reproduced here.)
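For reference, a sketch of the performance-index computation described above, assuming SciPy; the exact four-parameter logistic form is not given in the document, so the function below is one commonly used variant and should be treated as an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr, kendalltau

def logistic4(x, b1, b2, b3, b4):
    """Four-parameter logistic mapping applied before computing PLCC (assumed form)."""
    return (b1 - b2) / (1.0 + np.exp(-(x - b3) / np.abs(b4))) + b2

def performance_indices(objective, mos):
    """PLCC after nonlinear fitting, plus SROCC and KROCC on the raw scores."""
    objective, mos = np.asarray(objective, float), np.asarray(mos, float)
    p0 = [mos.max(), mos.min(), objective.mean(), objective.std() + 1e-6]
    params, _ = curve_fit(logistic4, objective, mos, p0=p0, maxfev=10000)
    fitted = logistic4(objective, *params)
    plcc = pearsonr(fitted, mos)[0]
    srocc = spearmanr(objective, mos)[0]
    krocc = kendalltau(objective, mos)[0]
    return plcc, srocc, krocc
```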

Claims (3)

1. A blind-reference tone mapping image quality evaluation method, characterized in that the method comprises the following steps:
(1) Let I_TM denote the tone-mapped image to be evaluated, with width W and height H; let L_TM denote the luminance component map of I_TM, whose total number of pixels is N = W × H.
(2) Arrange the pixel values of L_TM in descending order to obtain an N × 1 column vector P. Take the M_B-th value of P as the threshold of the high-brightness region, denoted T_B, and the M_D-th value of P as the threshold of the low-brightness region, denoted T_D. M_B and M_D each take n values, M_B = [M_B1, M_B2, ..., M_Bn], M_D = [M_D1, M_D2, ..., M_Dn].
(3) Set the pixel values of the regions of L_TM greater than T_B to 1 and all other pixel values to 0 to obtain the high-brightness binary maps of the image, denoted R_B, R_B = [R_B1, R_B2, ..., R_Bn]; set the pixel values of the regions of L_TM smaller than T_D to 1 and all other pixel values to 0 to obtain the low-brightness binary maps, denoted R_D, R_D = [R_D1, R_D2, ..., R_Dn].
(4) Using a structuring element A, apply an opening operation followed by a closing operation to each R_B and R_D to obtain the connected high-brightness and low-brightness binary maps, denoted R'_B and R'_D respectively, R'_B = [R'_B1, R'_B2, ..., R'_Bn], R'_D = [R'_D1, R'_D2, ..., R'_Dn].
(5) Compute the detail information of L_TM as its global detail-information feature, denoted E_G; compute the detail information of the regions marked by R'_B and R'_D as the local detail-information features of L_TM, denoted E_LB and E_LD respectively, E_LB = [E_LB1, E_LB2, ..., E_LBn], E_LD = [E_LD1, E_LD2, ..., E_LDn]. All detail-information features form the detail-information feature vector of L_TM, denoted f_details, f_details = [E_G, E_LB, E_LD], whose dimension is 2n + 1.
(6) Remove the local mean from L_TM and normalize the contrast to obtain the MSCN coefficients, denoted D_L. Fit the histogram of D_L with a generalized Gaussian distribution to obtain the fitting parameters: mean μ_L, standard deviation σ_L, kurtosis k_L and skewness s_L, and take μ_L, σ_L, k_L and s_L as the naturalness features of the luminance channel.
(7) Extract the yellow channel image of I_TM, denoted Y_TM, and process Y_TM to obtain the naturalness feature of the yellow channel.
(8) The naturalness feature of the luminance channel obtained in step (6) and the naturalness feature of the yellow channel obtained in step (7) together form, in order, the naturalness feature vector of I_TM, denoted f_naturalness.
(9) Convert I_TM from the RGB space to the HSV color space and extract its saturation component image and brightness (value) component image, denoted S_TM and V_TM respectively; process S_TM and V_TM to obtain the aesthetic feature of I_TM, denoted f_aesthetic.
(10) All the features obtained in the above steps form, in order, the feature vector of I_TM, denoted F, F = [f_details, f_naturalness, f_aesthetic].
(11) Taking F as input, compute the objective quality evaluation value of I_TM with the random forest technique.
2. The blind-reference tone mapping image quality evaluation method according to claim 1, characterized in that in step (7) the specific method for processing Y_TM is as follows: extract the matrix of local standard deviations of Y_TM, denoted σ_Y; extract the MSCN coefficients of σ_Y, denoted D_Y; fit the histogram of D_Y with a generalized Gaussian distribution to obtain the fitting parameter goodness g_Y, and take g_Y as the naturalness feature of the yellow channel.
3. The blind-reference tone mapping image quality evaluation method according to claim 1, characterized in that in step (9) the specific method for processing S_TM and V_TM is as follows: using the rule of thirds, divide S_TM and V_TM each into 3 × 3 non-overlapping image blocks of size ⌊W/3⌋ × ⌊H/3⌋; then compute the mean of each image block of S_TM and V_TM to obtain a saturation vector and a brightness vector, denoted S and V respectively, S = [s_1, s_2, ..., s_i, ..., s_9], V = [v_1, v_2, ..., v_i, ..., v_9], where s_i denotes the mean of the i-th image block of the saturation component and v_i the mean of the i-th image block of the brightness component, and take S and V as the aesthetic features.
CN201711303968.1A 2017-12-11 2017-12-11 Blind reference tone mapping image quality evaluation method Active CN108010024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711303968.1A CN108010024B (en) 2017-12-11 2017-12-11 Blind reference tone mapping image quality evaluation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711303968.1A CN108010024B (en) 2017-12-11 2017-12-11 Blind reference tone mapping image quality evaluation method

Publications (2)

Publication Number Publication Date
CN108010024A CN108010024A (en) 2018-05-08
CN108010024B true CN108010024B (en) 2021-12-07

Family

ID=62057957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711303968.1A Active CN108010024B (en) 2017-12-11 2017-12-11 Blind reference tone mapping image quality evaluation method

Country Status (1)

Country Link
CN (1) CN108010024B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191460B (en) * 2018-10-15 2021-10-26 方玉明 Quality evaluation method for tone mapping image
CN109218716B (en) * 2018-10-22 2020-11-06 天津大学 No-reference tone mapping image quality evaluation method based on color statistics and information entropy
CN110706196B (en) * 2018-11-12 2022-09-30 浙江工商职业技术学院 Clustering perception-based no-reference tone mapping image quality evaluation algorithm
CN109871852B (en) * 2019-01-05 2023-05-26 天津大学 No-reference tone mapping image quality evaluation method
CN109903247B (en) * 2019-02-22 2023-02-03 西安工程大学 High-precision graying method for color image based on Gaussian color space correlation
CN110827237B (en) * 2019-09-27 2022-10-04 浙江工商职业技术学院 Image quality evaluation method based on antagonistic color space semi-reference tone mapping
CN110910346A (en) * 2019-10-17 2020-03-24 浙江工商职业技术学院 Tone mapping image quality evaluation method based on dense scale invariant feature transformation
CN110910347B (en) * 2019-10-18 2023-06-06 宁波大学 Tone mapping image non-reference quality evaluation method based on image segmentation
CN110827241B (en) * 2019-10-21 2022-08-12 国家广播电视总局广播电视规划院 Low-brightness enhanced picture full-reference method based on color distortion and contrast enhancement
CN110796595B (en) * 2019-10-31 2022-03-01 北京大学深圳研究生院 Tone mapping method and device and electronic equipment
CN110853027A (en) * 2019-11-18 2020-02-28 方玉明 Three-dimensional synthetic image no-reference quality evaluation method based on local variation and global variation
CN116152249B (en) * 2023-04-20 2023-07-07 济宁立德印务有限公司 Intelligent digital printing quality detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834898A (en) * 2015-04-09 2015-08-12 华南理工大学 Quality classification method for portrait photography image
CN105491371A (en) * 2015-11-19 2016-04-13 国家新闻出版广电总局广播科学研究院 Tone mapping image quality evaluation method based on gradient magnitude similarity

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447884B (en) * 2015-12-21 2017-11-24 宁波大学 A kind of method for objectively evaluating image quality based on manifold characteristic similarity

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834898A (en) * 2015-04-09 2015-08-12 华南理工大学 Quality classification method for portrait photography image
CN105491371A (en) * 2015-11-19 2016-04-13 国家新闻出版广电总局广播科学研究院 Tone mapping image quality evaluation method based on gradient magnitude similarity

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
New No-Reference Stereo Image Quality Method for ...; Wang Ying et al.; IEEE Xplore; 2016-12-29; pp. 158-162 *

Also Published As

Publication number Publication date
CN108010024A (en) 2018-05-08

Similar Documents

Publication Publication Date Title
CN108010024B (en) Blind reference tone mapping image quality evaluation method
CN110046673B (en) No-reference tone mapping image quality evaluation method based on multi-feature fusion
Choi et al. Referenceless prediction of perceptual fog density and perceptual image defogging
CN107172418B (en) A kind of tone scale map image quality evaluating method based on exposure status analysis
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN109389591B (en) Color descriptor-based color image quality evaluation method
CN107657619B (en) A kind of low-light (level) Forest fire image dividing method
CN109919959B (en) Tone mapping image quality evaluation method based on color, naturalness and structure
Zhang et al. A no-reference evaluation metric for low-light image enhancement
CN109218716B (en) No-reference tone mapping image quality evaluation method based on color statistics and information entropy
CN104036493B (en) No-reference image quality evaluation method based on multifractal spectrum
El Khoury et al. Color and sharpness assessment of single image dehazing
CN107146220B (en) A kind of universal non-reference picture quality appraisement method
CN109829905A (en) It is a kind of face beautification perceived quality without reference evaluation method
CN111260645B (en) Tampered image detection method and system based on block classification deep learning
CN110706196B (en) Clustering perception-based no-reference tone mapping image quality evaluation algorithm
CN108010023B (en) High dynamic range image quality evaluation method based on tensor domain curvature analysis
CN109754390A (en) A kind of non-reference picture quality appraisement method based on mixing visual signature
CN110910347B (en) Tone mapping image non-reference quality evaluation method based on image segmentation
CN111489333B (en) No-reference night natural image quality evaluation method
Yang et al. EHNQ: Subjective and objective quality evaluation of enhanced night-time images
Kumar et al. Haze elimination model-based color saturation adjustment with contrast correction
CN110415816B (en) Skin disease clinical image multi-classification method based on transfer learning
CN112651945A (en) Multi-feature-based multi-exposure image perception quality evaluation method
CN110251076B (en) Method and device for detecting significance based on contrast and fusing visual attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant