CN106886992A - Saturation-based quality evaluation method for color multi-exposure fusion images - Google Patents
Saturation-based quality evaluation method for color multi-exposure fusion images Download PDF Info
- Publication number
- CN106886992A (application CN201710052878.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- similarity
- information
- exposure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20064—Wavelet transform [DWT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Abstract
The invention discloses a saturation-based quality evaluation method for color multi-exposure fusion images, which solves the problem that such images are difficult to evaluate accurately. Multi-exposure images and their fused images from the MEF database serve as training samples; texture, structure, and color information are extracted from them using saturation and wavelet coefficients, and the texture, structural, and color similarities are computed from that information. All of the similarities are used as feature values and, together with the given MOS values, are fed into an ELM (extreme learning machine) for training. To evaluate a result, the texture, structural, and color similarities between the multi-exposure images and the corresponding fused image are fed into the trained ELM, which outputs the evaluation result.
Description
Technical Field
The invention belongs to the field of image quality evaluation methods, and in particular relates to a saturation-based quality evaluation method for color multi-exposure fusion images.
Background
The brightness range that ordinary display devices can reproduce is far narrower than that of real scenes and of what the human eye can perceive, so a real scene as observed by the eye cannot be shown on an ordinary device. High dynamic range (HDR) display technology presents as much of a real scene's light and dark information as possible on ordinary devices, matching human perception as closely as possible. With the rapid development of the high-definition digital industry, HDR display has become one of the research hotspots in the digital field. Multi-exposure image fusion is a simple HDR display technique: a sequence of images taken at different exposures is fused directly into a single image that can be shown on an ordinary display. As multi-exposure fusion has developed, more and more fusion algorithms have been proposed, but different algorithms produce different fusion results. To compare their merits, an algorithm that can evaluate the quality of multi-exposure fusion is urgently needed.
Current quality evaluation algorithms for fused images fall into four categories. 1) Methods based on image information, which mainly use the mutual information between the images before and after fusion to evaluate the fused image; these consider only the overall information content and ignore individual pixels and local image structure. 2) Methods based on image features, which mainly use edge features in the spatial and wavelet domains; these ignore texture information and the perceptual characteristics of the human eye. 3) Methods based on structural similarity, inspired by the SSIM algorithm, which compute the structural similarity between the images before and after fusion and agree well with human vision; however, most such algorithms operate on grayscale images and ignore color information, whereas real multi-exposure fusion images are color images whose color information changes during fusion, so the existing algorithms are not suitable for them. 4) Methods based on human visual perception, which mainly use the salient information of the image but often ignore its background information, causing a loss of information.
Summary of the Invention
In view of this, the present invention provides a saturation-based quality evaluation method for color multi-exposure fusion images that improves the accuracy of evaluating such images.
The specific method for implementing this scheme is as follows:
Step 1. Take the multi-exposure images and their fused images in the multi-exposure image fusion (MEF) database as training samples. Extract texture, structure, and color information from both the multi-exposure images and the fused images using saturation and wavelet coefficients; from this information compute the texture, structural, and color similarities between the images before and after fusion; and feed these similarities, as feature values, together with the given evaluation scores, into an extreme learning machine (ELM) for training.
The texture information is extracted as follows. Let I_Q denote the image whose information is to be extracted (the fused image, or each image of the multi-exposure sequence). Apply a wavelet transform to I_Q to obtain the set of wavelet coefficients I_q = [LL LH HL HH], divided into low-, mid-, and high-frequency parts: LL contains the low-frequency coefficients; LH and HL contain the mid-frequency coefficients, with LH corresponding to the horizontal direction and HL to the vertical direction; HH contains the high-frequency coefficients. Since most of an image's texture information is concentrated in the mid- and high-frequency parts, extract the coefficient set I_q′ = [LH HL HH]. If I_Q is the fused image, the extracted set I_q′ itself is the texture information. If I_Q is a multi-exposure sequence, take the coefficient-wise maximum over the sets I_q′ of the individual images, forming Vmax = [max|LH|, max|HL|, max|HH|], as the texture information of the multi-exposure sequence.
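The subband extraction above can be sketched with a single-level 2-D Haar transform (the embodiment later also chooses the Haar basis). This is an illustrative implementation, not the authors' code; the function names `haar_dwt2` and `texture_info` are placeholders:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform of an even-sided image.

    Returns (LL, LH, HL, HH): LL is the low-frequency approximation;
    LH/HL are the mid-frequency (horizontal/vertical) details and HH
    the high-frequency diagonal detail used as texture information.
    """
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def texture_info(images):
    """Texture information I_q' = [LH HL HH].

    For a single (fused) image pass [img] and the detail coefficients
    themselves are returned; for a multi-exposure stack pass all images,
    and Vmax = [max|LH|, max|HL|, max|HH|] is returned, as described above.
    """
    subbands = [haar_dwt2(np.asarray(im, dtype=float))[1:] for im in images]
    if len(subbands) == 1:                 # fused image
        return list(subbands[0])
    return [np.max(np.abs(np.stack([s[i] for s in subbands])), axis=0)
            for i in range(3)]             # coefficient-wise maximum
```

The `/ 2.0` scaling makes the transform orthonormal; any consistent Haar normalization would serve the same purpose here.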
The color information is extracted as follows. Compute the saturation SA of image I_Q as SA = sqrt(((R − μ)² + (G − μ)² + (B − μ)²) / 3), where R, G, and B are the red, green, and blue channels of the color image and μ is their mean. If I_Q is the fused image, the saturation value SA is taken as the color information; if I_Q is a multi-exposure sequence, the maximum of the saturation values of the individual images is taken as the color information of the multi-exposure sequence.
Step 2. To evaluate the quality of a multi-exposure fusion result, apply a fusion algorithm to the multi-exposure images to generate the fused image to be evaluated.
Step 3. From the fused image to be evaluated and its corresponding pre-fusion multi-exposure images, extract texture, structure, and color information using the saturation- and wavelet-coefficient-based extraction described above.
Step 4. From the texture, structure, and color information of the images before and after fusion obtained in Step 3, compute the texture, structural, and color similarities.
Step 5. Feed the three similarities obtained in Step 4 as feature values into the trained ELM to obtain the evaluation result for the fused image under evaluation.
Preferably, the texture, structural, and color similarities of the images before and after fusion are computed as follows.
The similarity is defined as s = (2·I₁·I₂ + C) / (I₁² + I₂² + C), where I₁ and I₂ are parameters of the images before and after fusion and C is a constant. When I₁ and I₂ are the texture information of the images before and after fusion, substituting them into the formula for s yields the texture similarity TS; when they are the saturations of the images before and after fusion, substitution yields the saturation similarity SAS. The structural similarity is obtained from SS = (2·σ_xy + C₂) / (σ_x² + σ_y² + C₂), where σ_xy is the covariance of the image structures before and after fusion, and σ_x and σ_y are the standard deviations of the image structure before and after fusion, respectively.
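The two formulas above can be sketched directly; the constants follow the preferred embodiment (C = C₂ = 0.001), and since the formula images were lost in extraction the SSIM-style forms coded here are reconstructions implied by the named symbols:

```python
import numpy as np

C = 0.001    # constant of the generic similarity formula (preferred embodiment)
C2 = 0.001   # constant of the structural similarity formula

def similarity(i1, i2, c=C):
    """Generic similarity s = (2*I1*I2 + C) / (I1^2 + I2^2 + C).

    Applied elementwise: texture similarity TS when I1, I2 are texture
    coefficients, saturation similarity SAS when they are saturations.
    """
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    return (2.0 * i1 * i2 + c) / (i1 ** 2 + i2 ** 2 + c)

def structural_similarity(sigma_xy, sigma_x, sigma_y, c2=C2):
    """SS = (2*sigma_xy + C2) / (sigma_x^2 + sigma_y^2 + C2)."""
    return (2.0 * sigma_xy + c2) / (sigma_x ** 2 + sigma_y ** 2 + c2)
```

Because (a − b)² ≥ 0 implies 2ab ≤ a² + b², both expressions are bounded by 1 and reach 1 exactly when the two inputs agree.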
Preferably, the ELM trains on the input image features using the radial basis function (RBF) as its activation function, with the number of hidden-layer nodes set to 21.
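A minimal ELM with RBF hidden nodes can be sketched as below. This follows the standard ELM recipe (random hidden-node parameters, output weights solved in closed form with a pseudo-inverse); the patent does not give the authors' exact parameterization, so the center/width sampling and the class name `RBFELM` are assumptions:

```python
import numpy as np

class RBFELM:
    """Minimal extreme learning machine with RBF hidden nodes.

    Hidden node j computes phi_j(x) = exp(-b_j * ||x - a_j||^2) with
    randomly drawn centers a_j and widths b_j; only the output weights
    beta are learned, in closed form via the Moore-Penrose pseudo-inverse.
    """

    def __init__(self, n_hidden=21, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Squared distances from every sample to every center: (N, n_hidden)
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-self.widths * d2)

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.centers = self.rng.normal(size=(self.n_hidden, X.shape[1]))
        self.widths = self.rng.uniform(0.1, 1.0, size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        return self._hidden(np.asarray(X, dtype=float)) @ self.beta
```

In the method above, each training row of X would be a feature vector of similarities and y the corresponding MOS value.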
Preferably, the structural information of the fused image is extracted with the structural similarity (SSIM) algorithm.
Preferably, the constant in the similarity formula s is C = 0.001.
Preferably, the constant in the structural similarity formula SS is C₂ = 0.001.
Beneficial Effects
By jointly considering individual pixels and local image structure, the texture information of the image, human perceptual characteristics, the color information of the image, and its background information, this method solves the problem of inaccurate evaluation of color multi-exposure fusion images. Specifically:
1. The method computes the saturation similarity between the images before and after fusion and uses it as a measure of the fused image's color information. It evaluates well how the color information changes during fusion, filling a gap in the quality evaluation of color information for fusion algorithms.
2. Because texture is preserved differently at different exposure levels during multi-exposure capture, the method uses the wavelet transform to extract the texture information of the images before and after fusion. Taking the image background into account, it can evaluate well how much texture information a fusion algorithm preserves during the fusion process.
3. The method uses an ELM, which both describes clearly the relationship among texture similarity, structural similarity, and saturation similarity, and produces prediction results quickly and accurately, leaving considerable room for improvement in engineering applications.
4. Taking into account the color information of the image as well as individual pixels and local image structure, the method uses the SSIM algorithm to compute the structural similarity between the images before and after fusion, which agrees well with human vision and evaluates well how much structural information a fusion algorithm preserves during the fusion process.
Brief Description of the Drawings
Fig. 1 is the overall flowchart of an embodiment of the present invention.
Fig. 2 is the flowchart for generating the texture similarity in the present invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
The present invention provides a saturation-based quality evaluation method for color multi-exposure fusion images; the overall flow is shown in Fig. 1.
Step 1. Take the multi-exposure images and their fused images in the MEF (multi-exposure image fusion) database as training samples. Extract texture, structure, and color information from both the multi-exposure images and the fused images using saturation and wavelet coefficients; from this information compute the texture, structural, and color similarities between the images before and after fusion; and feed these similarities, as feature values, together with the given evaluation scores, i.e., the mean opinion score (MOS) values, into an extreme learning machine (ELM) for training.
The training data input to the ELM are provided by the MEF database (K. Ma, K. Zeng and Z. Wang, "Perceptual Quality Assessment for Multi-Exposure Image Fusion," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345-3356, Nov. 2015); the test data are the similarity values obtained in the previous step, and after learning with the ELM algorithm the corresponding evaluation results are obtained. The ELM is a fast machine learning method proposed in (G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, pp. 489-501, 2006). Two activation functions are common in the algorithm, the sigmoid function and the radial basis function (RBF); here the RBF activation function, as stated in the preferred embodiment above, is used to train the input features and obtain the desired results. The number of hidden-layer nodes is set to 21, and the test output of the trained ELM is the evaluation result.
The texture information is extracted as follows. Let I_Q denote the image whose information is to be extracted (the fused image, or each image of the multi-exposure sequence). Apply a wavelet transform to I_Q to obtain the set of wavelet coefficients I_q = [LL LH HL HH], divided into low-, mid-, and high-frequency parts: LL contains the low-frequency coefficients; LH and HL contain the mid-frequency coefficients, with LH corresponding to the horizontal direction and HL to the vertical direction; HH contains the high-frequency coefficients. Since most of an image's texture information is concentrated in the mid- and high-frequency parts, extract the coefficient set I_q′ = [LH HL HH]. If I_Q is the fused image, the extracted set I_q′ itself is the texture information. If I_Q is a multi-exposure sequence, take the coefficient-wise maximum over the sets I_q′ of the individual images, forming Vmax = [max|LH|, max|HL|, max|HH|], as the texture information of the multi-exposure sequence.
To extract as much texture information as possible, the present invention applies a three-level wavelet decomposition to the images before and after fusion; the texture similarity values obtained at the three decomposition levels are TS₁, TS₂, and TS₃, respectively.
The structural information is extracted as follows. The standard deviation of a local image region characterizes the image structure. If I_Q is the fused image, the local standard deviation is computed directly. If I_Q is a multi-exposure sequence, since there are several images, a single piece of structural information is extracted from the pre-fusion multi-exposure images, while each fused image yields its own structural information via the SSIM (structural similarity) algorithm. This algorithm adopts the structure-fusion method of (K. Ma, K. Zeng and Z. Wang, "Perceptual Quality Assessment for Multi-Exposure Image Fusion," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345-3356, Nov. 2015) to extract the structure map of the multi-exposure sequence. The structure map can be expressed as ŝ = s̄ / ‖s̄‖, where ŝ is the structure map sought, s̄ is the structure map before normalization, and ‖·‖ denotes the modulus. The map before normalization is s̄ = Σ_{k=1}^{K} w_k s_k, where w_k is the weight of the k-th image, s_k is the pixel value of the k-th image, and K is the number of multi-exposure images; the principle is a weighted combination of the K multi-exposure images. The weight is w_k = ‖x_k‖^p, where x_k is the pixel value of the k-th multi-exposure image and the exponent p expresses the structural strength of the local region; the strength depends on the value R, which expresses the structural continuity across the multi-exposure pixel values x_k. The structure map of the multi-exposure sequence is extracted in this way. After obtaining it, its local standard deviation is computed as the structural information of the multi-exposure sequence. For the resulting structure map of the pre-fusion images and for the fused image, the local standard deviation and the covariance are extracted directly over local regions of size 11×11. The structural information of the fused image is extracted with the SSIM algorithm.
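The weighted-combination idea can be sketched as below. The original equations for w_k, p, and R are garbled in this text, so this is only an assumption-laden illustration of the cited structural-patch-decomposition approach: each mean-removed patch votes with a weight given by its norm raised to a fixed exponent `p`, and the result is renormalized to unit length. The function name and the exponent value are placeholders:

```python
import numpy as np

def structure_map(patches, p=4):
    """Desired structure of a stack of co-located multi-exposure patches.

    Sketch only: the exact weight and continuity formulas of the cited
    method are not reproduced here. Each patch x_k is mean-removed,
    weighted by ||x_k||^p, summed, and the sum normalized to unit norm,
    yielding s_hat = s_bar / ||s_bar||.
    """
    eps = 1e-12
    tildes = [np.asarray(x, dtype=float) for x in patches]
    tildes = [t - t.mean() for t in tildes]                    # remove mean
    norms = [np.linalg.norm(t) + eps for t in tildes]
    s_bar = sum((n ** p) * (t / n) for t, n in zip(tildes, norms))
    return s_bar / (np.linalg.norm(s_bar) + eps)               # unit norm
```

Because of the power weighting, the exposure with the strongest local contrast dominates the desired structure, which is the intended behavior of the cited structure-fusion step.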
The color information is extracted as follows. Compute the saturation SA of image I_Q as SA = sqrt(((R − μ)² + (G − μ)² + (B − μ)²) / 3), where R, G, and B are the red, green, and blue channels of the color image and μ is their mean. If I_Q is the fused image, the saturation value SA is taken as the color information; if I_Q is a multi-exposure sequence, the maximum of the saturation values of the individual images is taken as the color information of the multi-exposure sequence.
The similarity is defined as s = (2·I₁·I₂ + C) / (I₁² + I₂² + C), where I₁ and I₂ are parameters of the images before and after fusion and C is a constant. When I₁ and I₂ are the texture information of the images before and after fusion, substituting them into the formula for s yields the texture similarity TS; when they are the saturations of the images before and after fusion, substitution yields the saturation similarity SAS. The structural similarity is obtained from SS = (2·σ_xy + C₂) / (σ_x² + σ_y² + C₂), where σ_xy is the covariance of the image structures before and after fusion, and σ_x and σ_y are the standard deviations of the image structure before and after fusion, respectively. The constant in the similarity formula is C = 0.001, and the constant in the structural similarity formula is C₂ = 0.001.
Step 2. To evaluate the quality of a multi-exposure fusion result, apply a fusion algorithm to the multi-exposure images to generate the fused image to be evaluated.
The fused images are generated from the pre-fusion multi-exposure images as follows: different fusion algorithms are applied to the set of pre-fusion multi-exposure images (at least three images) to generate corresponding fused images. The number of fused images depends on the fusion algorithms being evaluated, with one fused image per fusion algorithm.
Step 3. From the fused image to be evaluated and its corresponding pre-fusion multi-exposure images, extract texture, structure, and color information using the saturation- and wavelet-coefficient-based extraction described above.
Step 4. From the texture, structure, and color information of the images before and after fusion obtained in Step 3, compute the texture, structural, and color similarities.
Fig. 2 shows the process of extracting the texture similarity between the fused image to be evaluated and its corresponding pre-fusion images. Following the texture extraction scheme, the images before and after fusion are first wavelet-transformed, here with the Haar wavelet basis; then the richest texture information of the pre-fusion images I_c is extracted as Vmax = [LH_max HL_max HH_max], and the texture information of the fused image I_f is extracted as V_F = [LH_f HL_f HH_f].
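The Fig. 2 pipeline can be sketched end to end as below for a single decomposition level. Mean pooling of the per-coefficient similarity map into one TS value is an assumption, since the text does not name the pooling; the function names are placeholders:

```python
import numpy as np

def _haar_details(img):
    # One Haar level; returns the (LH, HL, HH) detail subbands.
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b - c - d) / 2.0,   # LH, horizontal detail
            (a - b + c - d) / 2.0,   # HL, vertical detail
            (a - b - c + d) / 2.0)   # HH, diagonal detail

def texture_similarity(exposures, fused, C=0.001):
    """TS between a multi-exposure stack and its fused result (Fig. 2).

    Vmax = [max|LH|, max|HL|, max|HH|] over the stack is compared with the
    fused image's coefficient magnitudes via s = (2ab + C)/(a^2 + b^2 + C),
    then averaged over all coefficients (assumed pooling).
    """
    stack = [_haar_details(np.asarray(im, dtype=float)) for im in exposures]
    vmax = [np.max(np.abs(np.stack([s[i] for s in stack])), axis=0)
            for i in range(3)]
    vf = [np.abs(sb) for sb in _haar_details(np.asarray(fused, dtype=float))]
    sims = [(2.0 * a * b + C) / (a ** 2 + b ** 2 + C)
            for a, b in zip(vmax, vf)]
    return float(np.mean([s.mean() for s in sims]))
```

Repeating this on the LL subband gives the TS₂ and TS₃ values of the three-level decomposition described above.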
Step 5. Feed the three similarities obtained in Step 4 as feature values into the trained ELM; the final feature vector input to the ELM is [TS₁ TS₂ TS₃ SS SAS]. The evaluation result for the fused image to be evaluated is then obtained.
This completes the quality evaluation of the color multi-exposure fusion image.
To sum up, the above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710052878.3A CN106886992A (en) | 2017-01-24 | 2017-01-24 | A kind of quality evaluating method of many exposure fused images of the colour based on saturation degree |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106886992A true CN106886992A (en) | 2017-06-23 |
Family
ID=59175437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710052878.3A Pending CN106886992A (en) | 2017-01-24 | 2017-01-24 | A kind of quality evaluating method of many exposure fused images of the colour based on saturation degree |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106886992A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101201937A (en) * | 2007-09-18 | 2008-06-18 | 上海医疗器械厂有限公司 | Digital image enhancement method and device based on wavelet reconstruction and decomposition |
CN101777060A (en) * | 2009-12-23 | 2010-07-14 | 中国科学院自动化研究所 | Automatic evaluation method and system of webpage visual quality |
CN101872479A (en) * | 2010-06-09 | 2010-10-27 | 宁波大学 | A Stereo Image Objective Quality Evaluation Method |
CN102170581A (en) * | 2011-05-05 | 2011-08-31 | 天津大学 | Human-visual-system (HVS)-based structural similarity (SSIM) and characteristic matching three-dimensional image quality evaluation method |
Non-Patent Citations (5)
Title |
---|
KEDE MA等: "Perceptual Quality Assessment for Multi-Exposure Image Fusion", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 * |
SHUIGEN WANG等: "NMF-Based Image Quality Assessment Using Extreme Learning Machine", 《IEEE TRANSACTIONS ON CYBERNETICS》 * |
ZHOU WANG等: "Image Quality Assessment: From Error Visibility to Structural Similarity", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 * |
LI WEIZHONG等: "Detail-Preserving Multi-Exposure Image Fusion", 《光学精密工程》(OPTICS AND PRECISION ENGINEERING) * |
WANG SHUIZHANG等: "Texture Feature Extraction Based on Wavelet Transform", 《科技情报开发与经济》(SCI-TECH INFORMATION DEVELOPMENT & ECONOMY) * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107680089A (en) * | 2017-10-09 | 2018-02-09 | 济南大学 | A kind of abnormal automatic judging method of ultra-high-tension power transmission line camera image |
CN110555891A (en) * | 2018-05-15 | 2019-12-10 | 北京连心医疗科技有限公司 | Imaging quality control method and device based on wavelet transformation and storage medium |
CN110555891B (en) * | 2018-05-15 | 2023-03-14 | 北京连心医疗科技有限公司 | Imaging quality control method and device based on wavelet transformation and storage medium |
CN108401154A (en) * | 2018-05-25 | 2018-08-14 | 同济大学 | A kind of image exposure degree reference-free quality evaluation method |
CN109448037A (en) * | 2018-11-14 | 2019-03-08 | 北京奇艺世纪科技有限公司 | A kind of image quality evaluating method and device |
CN109448037B (en) * | 2018-11-14 | 2020-11-03 | 北京奇艺世纪科技有限公司 | Image quality evaluation method and device |
CN109871852A (en) * | 2019-01-05 | 2019-06-11 | 天津大学 | A reference-free tone-mapping image quality assessment method |
CN109871852B (en) * | 2019-01-05 | 2023-05-26 | 天津大学 | A No-reference Tone Mapping Image Quality Evaluation Method |
CN113409247A (en) * | 2021-04-15 | 2021-09-17 | 宁波大学 | Multi-exposure fusion image quality evaluation method |
CN113409247B (en) * | 2021-04-15 | 2022-07-15 | 宁波大学 | Multi-exposure fusion image quality evaluation method |
CN113610863A (en) * | 2021-07-22 | 2021-11-05 | 东华理工大学 | Multi-exposure image fusion quality evaluation method |
CN113610863B (en) * | 2021-07-22 | 2023-08-04 | 东华理工大学 | Multi-exposure image fusion quality assessment method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106886992A (en) | A kind of quality evaluating method of many exposure fused images of the colour based on saturation degree | |
CN110046673B (en) | No-reference tone mapping image quality evaluation method based on multi-feature fusion | |
CN106127741B (en) | Non-reference picture quality appraisement method based on improvement natural scene statistical model | |
CN111047543B (en) | Image enhancement method, device and storage medium | |
CN104023230B (en) | A kind of non-reference picture quality appraisement method based on gradient relevance | |
CN111079740A (en) | Image quality evaluation method, electronic device, and computer-readable storage medium | |
CN104361593B (en) | A Color Image Quality Evaluation Method Based on HVS and Quaternion | |
CN106127718B (en) | A kind of more exposure image fusion methods based on wavelet transformation | |
CN116681636B (en) | Light infrared and visible light image fusion method based on convolutional neural network | |
CN110706196B (en) | Clustering perception-based no-reference tone mapping image quality evaluation algorithm | |
CN108961227B (en) | Image quality evaluation method based on multi-feature fusion of airspace and transform domain | |
WO2018058090A1 (en) | Method for no-reference image quality assessment | |
CN106127234B (en) | A no-reference image quality assessment method based on feature dictionary | |
CN104318545B (en) | A kind of quality evaluating method for greasy weather polarization image | |
CN107743225B (en) | A Method for No-Reference Image Quality Prediction Using Multi-Layer Depth Representations | |
CN107610093B (en) | Full-reference image quality assessment method based on similarity feature fusion | |
CN116664462B (en) | Infrared and visible light image fusion method based on MS-DSC and I_CBAM | |
WO2022126674A1 (en) | Method and system for evaluating quality of stereoscopic panoramic image | |
CN114529946A (en) | Pedestrian re-identification method, device, equipment and storage medium based on self-supervision learning | |
CN110910347B (en) | A No-Reference Quality Assessment Method for Tone Mapping Images Based on Image Segmentation | |
CN111597933A (en) | Face recognition method and device | |
CN104392233A (en) | Image saliency map extracting method based on region | |
CN104346809A (en) | Image quality evaluation method for image quality dataset adopting high dynamic range | |
CN107371015A (en) | A No-reference Contrast Variation Image Quality Evaluation Method | |
CN115578614B (en) | Training method of image processing model, image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170623 |