CN106295679B - A color image light source color estimation method based on classification correction - Google Patents

A color image light source color estimation method based on classification correction

Info

Publication number: CN106295679B
Application number: CN201610606092.7A
Other versions: CN106295679A
Other languages: Chinese (zh)
Authority: CN (China)
Prior art keywords: light source, image, feature, training, color
Legal status: Active (granted)
Inventors: 李永杰, 张明, 高绍兵, 任燕泽
Original and current assignee: University of Electronic Science and Technology of China
Legal events: application CN201610606092.7A filed by University of Electronic Science and Technology of China; published as CN106295679A; application granted and published as CN106295679B.

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related features, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]


Abstract

The invention discloses a color image light source color estimation method based on classification correction. Edge features are first extracted from a set of images with known light source colors, and a correction matrix relating the edge features to the light sources is learned by the least squares method. Edge features are then extracted from the test image to be processed and multiplied by the correction matrix to obtain a rough light source estimate. Afterwards, a class of training images whose features resemble those of the test image is found by locating the K nearest images in feature space, and the learning is repeated on this class to obtain an accurate light source estimate. The invention involves few parameters, and because the extracted features are simple and few in number, it is also computationally simple and fast. Moreover, being a learning-based method, it performs well and is highly accurate, making it well suited to applications that demand high accuracy in light source color estimation.

Description

A Color Image Light Source Color Estimation Method Based on Classification Correction

Technical Field

The invention belongs to the technical field of computer vision and image processing, and in particular relates to the design of a color image light source color estimation method based on classification correction.

Background

In natural environments, the same object appears in different colors under illumination of different colors: green leaves, for example, look yellowish in morning light but bluish at dusk. The human visual system can discount such changes in light source color and perceive object colors constantly; that is, the visual system possesses color constancy. Constrained by technology, however, machines lack this ability, and pictures captured by physical devices such as cameras exhibit severe color casts as the light source color changes. It is therefore particularly important to accurately estimate the light source color in a scene from the available image information and remove it, so as to recover the colors objects would have under standard white light.

Computational color constancy is devoted to exactly this problem. Its main purpose is to compute the color of the unknown light source contained in an arbitrary image, use the computed light source color to correct the original input image, and display the result under standard white light, yielding a so-called standard image. Because the standard image is free of the influence of the light source color, subsequent computational tasks such as color-based scene classification and image retrieval suffer no misclassification or incorrect retrieval caused by color casts.

Computational color constancy methods fall into two categories: learning-based methods and traditional static methods. Traditional static methods extract simple features from the image for light source estimation; their estimation errors are large, so they cannot satisfy engineering needs well. Learning-based methods arose from this need on the basis of the traditional non-learning methods. A typical learning-based method is that proposed by G.D. Finlayson in 2013 (G.D. Finlayson, "Corrected-moment illuminant estimation," in Proc. IEEE Int. Conf. Comput. Vis., 2013, pp. 1904-1911), which extracts features and uses regression to find the relationship between the features and the light source. Because it uses regression, its light source estimates are relatively accurate, but it applies the same correction matrix to all images, so the estimated light source is badly wrong for some images; it therefore cannot satisfy applications demanding high accuracy in light source color estimation, such as the front end of image-receiving devices in intelligent robots or autonomous driving. It is thus particularly important to realize a method that learns different correction matrices for different images.

Summary of the Invention

The purpose of the invention is to solve the problem that prior-art methods for estimating the light source color of an image scene cannot satisfy applications demanding high accuracy in the estimated light source color; to this end, a color image light source color estimation method based on classification correction is proposed.

The technical scheme of the invention is a color image light source color estimation method based on classification correction, comprising the following steps:

S1. Extract the edge features of the training images: take N color images with known light sources as the original training set T, convolve each with the template G obtained by differentiating a Gaussian distribution to obtain the edge value at every pixel, and extract edge features, yielding the edge feature matrix M of the N training images;

S2. Learn the correction matrix: using the least squares method, learn the correction matrix C between the feature matrix M computed in step S1 and the standard light sources L of the N training images;

S3. Rough light source estimation: extract the edge features of the test image with the method of step S1 and multiply them by the correction matrix C learned in step S2 to obtain a rough light source estimate L1;

S4. Find the training images corresponding to the test image: remove the light source from the test image and from the original training set T, then extract edge features from each with the method of step S1 to form a feature space; in this feature space, find the K training images whose features are closest to those of the test image and take them as the new training set TN;

S5. Accurate light source estimation: repeat steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4 (the number of training images correspondingly changing from N to K), until the TN obtained in step S4 is identical to the TN obtained in step S4 of the previous iteration; take the light source estimate L1 produced by step S3 in the last iteration as the final light source estimate.

Further, the template G obtained by differentiating the Gaussian distribution in step S1 is the Gaussian gradient operator.

Further, the formula for extracting the edge features in step S1 is:

$$M_{xyz}=\left(\frac{1}{N_1}\sum_{i=1}^{N_1}R_i^{x}G_i^{y}B_i^{z}\right)^{\frac{1}{x+y+z}}$$

where R_i, G_i and B_i denote the edge values of each pixel in the R, G and B channels respectively, N_1 denotes the number of pixels in the image, and M_xyz is the value of the edge feature for given x, y, z; (x, y, z) ranges over all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and 1 ≤ x + y + z ≤ 3.

Further, the value range of K in step S4 is

Further, step S4 specifically comprises the following sub-steps:

S41. Remove the standard light sources L from the original N training images, and extract edge features with the method of step S1;

S42. Remove the light source L1 roughly estimated in step S3 from the test image, and extract edge features with the method of step S1; these, together with the edge features extracted from the N training images in step S41, form the feature space;

S43. In the feature space, find the K images closest in feature distance to the test image, and take them as the new training image set TN of the test image.

Further, the feature distance in step S43 is the Euclidean distance.

The beneficial effects of the invention are as follows. The invention first extracts edge features from a set of images with known light source colors, then learns a correction matrix between the edge features and the light sources by the least squares method; edge features are extracted from the test image to be processed and multiplied by the correction matrix to obtain a rough light source estimate. A class of training images whose features resemble those of the test image is then found by locating K neighboring images in feature space, and the learning is repeated on this class to obtain an accurate light source estimate. Since the distances in feature space between the test image and the training images differ, appropriately adjusting K, the number of corresponding training images, yields results better suited to different types of training images; K is the only parameter. The invention involves few parameters (only the single parameter K), and because the extracted features are simple and few in number, it is also computationally simple and fast. Moreover, as a learning-based method it performs well and is highly accurate, making it well suited to applications that demand high accuracy in light source color estimation, for example built into the front end of image-receiving devices for intelligent robots or autonomous driving.

Description of the Drawings

Figure 1 is a flow chart of the color image light source color estimation method based on classification correction provided by the invention.

Figure 2 is the test image tools_ph-ulm.tif to be processed in Embodiment 1 of the invention.

Figure 3 is a schematic diagram of the error between the light source estimated at each step and the real light source in Embodiment 1 of the invention.

Figure 4 is a schematic diagram comparing the final light source estimate with the real light source in Embodiment 1 of the invention.

Figure 5 is a schematic diagram of the result of tone-correcting the original test image with the light source color values computed in step S5, in Embodiment 2 of the invention.

Detailed Description of the Embodiments

The embodiments of the invention are further described below with reference to the accompanying drawings.

The invention provides a color image light source color estimation method based on classification correction which, as shown in Figure 1, comprises the following steps.

S1. Extract the edge features of the training images: take N color images with known light sources as the original training set T, convolve each with the template G obtained by differentiating a Gaussian distribution to obtain the edge value at every pixel, and extract edge features, yielding the edge feature matrix M of the N training images.

Here, the template G obtained by differentiating the Gaussian distribution is the Gaussian gradient operator.

The formula for extracting the edge features is:

$$M_{xyz}=\left(\frac{1}{N_1}\sum_{i=1}^{N_1}R_i^{x}G_i^{y}B_i^{z}\right)^{\frac{1}{x+y+z}}$$

where R_i, G_i and B_i denote the edge values of each pixel in the R, G and B channels respectively, N_1 denotes the number of pixels in the image, and M_xyz is the value of the edge feature for given x, y, z; (x, y, z) ranges over all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and 1 ≤ x + y + z ≤ 3. The total number of such combinations is 19, so 19 edge features are obtained here.
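The feature extraction of step S1 can be sketched in code. The following is a minimal illustration, not the patented implementation: the Gaussian-derivative template (its sigma and radius), the use of gradient magnitude as the edge value, and all function names are assumptions, and the exponent set follows the 19-feature count stated above.

```python
import itertools
import numpy as np

def gaussian_derivative_kernels(sigma=1.0, radius=3):
    """1-D Gaussian g and its derivative dg (one reading of 'template G')."""
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    dg = -t / sigma ** 2 * g
    return g, dg

def separable_conv2(img, k_rows, k_cols):
    """2-D convolution as two 1-D passes (same output size)."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k_rows, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k_cols, "same"), 0, tmp)

def edge_moments(image, sigma=1.0):
    """19 edge-moment features of an (H, W, 3) RGB image.

    Edge values are Gaussian-derivative gradient magnitudes per channel;
    the feature for exponents (x, y, z) is the pixel mean of R^x G^y B^z
    raised to 1/(x+y+z), so each feature scales linearly with the illuminant.
    """
    g, dg = gaussian_derivative_kernels(sigma)
    edges = np.empty(image.shape, dtype=float)
    for c in range(3):
        ch = image[..., c].astype(float)
        edges[..., c] = np.hypot(separable_conv2(ch, dg, g),
                                 separable_conv2(ch, g, dg))
    R, G, B = (edges[..., c].ravel() for c in range(3))
    feats = []
    # all exponents with x, y, z >= 0 and 1 <= x+y+z <= 3: 3 + 6 + 10 = 19
    for x, y, z in itertools.product(range(4), repeat=3):
        d = x + y + z
        if 1 <= d <= 3:
            feats.append(np.mean(R ** x * G ** y * B ** z) ** (1.0 / d))
    return np.array(feats)
```

The degree root makes each feature homogeneous of degree one in the pixel values, which is what lets a linear correction matrix map features to an illuminant.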

S2. Learn the correction matrix: using the least squares method, learn the correction matrix C between the feature matrix M computed in step S1 and the standard light sources L of the N training images.
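Steps S2 and S3 amount to an ordinary least squares fit followed by a matrix product. A brief sketch, assuming M is an N×19 feature matrix and L an N×3 matrix of known illuminants (the function names are illustrative):

```python
import numpy as np

def learn_correction_matrix(M, L):
    """Step S2: least-squares correction matrix C (19 x 3) with M @ C ~= L."""
    C, _, _, _ = np.linalg.lstsq(M, L, rcond=None)
    return C

def rough_estimate(m1, C):
    """Step S3: rough illuminant L1 from a test image's 1 x 19 feature row."""
    return m1 @ C
```

`np.linalg.lstsq` minimizes the Frobenius norm of M @ C - L column by column, which matches fitting each of the three illuminant channels independently.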

S3. Rough light source estimation: extract the edge features of the test image with the method of step S1 and multiply them by the correction matrix C learned in step S2 to obtain the rough light source estimate L1.

S4. Find the training images corresponding to the test image: remove the light source from the test image and from the original training set T, then extract edge features from each with the method of step S1 to form a feature space; in this feature space, find the K training images whose features are closest to those of the test image, and take them as the new training set TN.

This step specifically comprises the following sub-steps.

S41. Remove the standard light sources L from the original N training images, and extract edge features with the method of step S1.

S42. Remove the light source L1 roughly estimated in step S3 from the test image, and extract edge features with the method of step S1; these, together with the edge features extracted from the N training images in step S41, form the feature space.

S43. In the feature space, find the K images closest in feature distance to the test image, and take them as the new training image set TN of the test image.

Here, the feature distance in step S43 is the Euclidean distance.

S5. Accurate light source estimation: repeat steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4 (the number of training images correspondingly changing from N to K), until the TN obtained in step S4 is identical to the TN obtained in step S4 of the previous iteration; take the light source estimate L1 produced by step S3 in the last iteration as the final light source estimate.
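The S4/S5 iteration can be sketched as follows. This is a simplified illustration under stated assumptions: the patent re-runs step S1 on illuminant-removed images at each pass, which is abstracted here into a caller-supplied hook, and all names are illustrative rather than taken from the patent.

```python
import numpy as np

def refine_illuminant(M_full, L_full, m_test, K, feats_after_removal,
                      max_iter=10):
    """Sketch of steps S4-S5: restrict training to the K nearest images and
    re-learn the correction matrix until the neighbour set TN stabilises.

    M_full: (N, F) training features; L_full: (N, 3) known illuminants;
    m_test: (F,) test-image features. feats_after_removal(L1) must return
    the illuminant-removed training and test features (steps S41/S42); it
    is a hook because the patent re-extracts features from corrected images.
    """
    C, *_ = np.linalg.lstsq(M_full, L_full, rcond=None)  # steps S1-S3
    L1 = m_test @ C
    tn_prev = None
    for _ in range(max_iter):
        train_f, test_f = feats_after_removal(L1)
        dist = np.linalg.norm(train_f - test_f, axis=1)  # Euclidean (S43)
        tn = np.sort(np.argsort(dist)[:K])
        if tn_prev is not None and np.array_equal(tn, tn_prev):
            break                                        # TN unchanged: stop
        tn_prev = tn
        C, *_ = np.linalg.lstsq(M_full[tn], L_full[tn], rcond=None)
        L1 = m_test @ C
    return L1
```

The stopping rule mirrors the text: the loop ends as soon as the newly selected neighbour set TN equals the one from the previous pass.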

The final light source estimate L1 computed after step S5 can be used directly in subsequent computer vision applications; for example, dividing each color channel of the original input color image by the corresponding component of L1 removes the light source color from the color image. In addition, image white balancing and color correction also rely on the final light source estimate L1 computed in step S5.
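The channel-wise division described above is a one-line diagonal (von Kries-style) correction; a small sketch, with the function name and demo values chosen for illustration:

```python
import numpy as np

def remove_light_source(image, L1):
    """Divide each color channel of an (H, W, 3) image by the corresponding
    component of the estimated illuminant L1, removing the light source color."""
    return np.asarray(image, dtype=float) / np.asarray(L1, dtype=float)

# a flat grey surface seen under a reddish illuminant comes back neutral
L1 = np.array([0.5, 0.3, 0.2])
scene = np.ones((4, 4, 3)) * L1        # the image that illuminant produces
balanced = remove_light_source(scene, L1)
```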

A specific embodiment is given below to further illustrate the color image light source color estimation method based on classification correction provided by the invention.

Embodiment 1:

Download all 321 images of the SFU object image set, the internationally recognized benchmark for estimating scene light source color, together with their corresponding real light source colors (standard light sources) L. All images are 468×637. The first 214 images of the set are used as the training set, and one of the remaining images, tools_ph-ulm.tif (shown in Figure 2), is selected as the test image to be processed. None of the images has undergone any in-camera preprocessing (such as tone correction or gamma correction). The detailed steps of the invention are then as follows.

S1. Extract the edge features of the training images: take the 214 color images with known light sources as the original training set T, convolve each with the template G (the Gaussian gradient operator) obtained by differentiating the Gaussian distribution to obtain the edge value at every pixel, then extract the 19-dimensional edge features, finally obtaining the 214×19 edge feature matrix M of the training set images.

S2. Learn the correction matrix: using the least squares method, learn the correction matrix between the feature matrix M computed in step S1 and the standard light sources L of the 214 training images, obtaining the 19×3 correction matrix C:

C = [-150.0689, -30.1462, -21.5186; -96.5582, -196.1642, -348.5298; 52.6551, 76.4461, 115.5982; -200.5289, -240.3650, -179.6495; -79.6311, 72.4539, 125.1126; -56.1276, -130.2963, -226.1518; 683.9180, 552.9035, 366.8781; 214.1444, -15.5379, -52.8198; -149.3407, 138.0260, 397.1888; 154.6218, 240.3336, 128.2161; 156.5752, -50.6503, 69.4182; 22.7103, 90.3730, 274.5781; -65.9786, -384.7642, -66.2556; -112.7044, -104.0913, -12.8868; -349.7427, 81.5115, -215.8972; -79.0109, -48.0727, -32.2072; -98.2723, -22.7039, -51.2091; 108.6481, -52.0896, -265.9989; 172.6056, 171.2726, 95.1991].

S3. Rough light source estimation: extract the 19-dimensional edge features of the test image with the method of step S1, obtaining the 1×19 edge feature matrix M1 of the test image:

M1 = [0.0002, 0.0004, 0.0002, 0.0014, 0.0017, 0.0012, 0.0015, 0.0013, 0.0014, 0.0036, 0.0040, 0.0031, 0.0037, 0.0034, 0.0039, 0.0037, 0.0032, 0.0034, 0.0035].

Multiplying M1 by the correction matrix C learned in step S2 gives the rough light source estimate L1 = [0.1985, 0.2151, 0.2360].

S4. Find the training images corresponding to the test image: remove the light source from the test image and from the original training set T of 214 images, then extract 19-dimensional edge features from each with the method of step S1 to form a feature space. In this feature space, find the K training images whose features are closest to those of the test image, thereby obtaining a class of images with similar features, and take these K images as the new training set TN. In this embodiment of the invention, K = 100 is selected.

This step specifically comprises the following sub-steps.

S41. Remove the standard light sources L from the original 214 training images, and extract 19-dimensional edge features with the method of step S1, obtaining the 214×19 feature matrix M0.

S42. Remove the light source L1 roughly estimated in step S3 from the test image, and extract edge features with the method of step S1, obtaining the 1×19 feature matrix M2:

M2 = [0.0060, 0.0089, 0.0038, 0.0366, 0.0371, 0.0224, 0.0365, 0.0282, 0.0285, 0.0937, 0.0900, 0.0581, 0.0921, 0.0790, 0.0909, 0.0768, 0.0673, 0.0664, 0.0777].

M2 and the edge features M0 extracted from the 214 training images in step S41 together form the feature space.

S43. In the feature space, find the 100 images closest in feature distance to the test image, and take them as the new training image set TN of the test image.

S5. Accurate light source estimation: repeat steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4 (the number of training images correspondingly changing from N to K), until the TN obtained in step S4 is identical to the TN obtained in step S4 of the previous iteration; take the light source estimate L1 produced by step S3 in the last iteration as the final light source estimate.

In this embodiment of the invention, to save time, repeating the procedure twice suffices. The light source estimate after the first repetition is L1 = [0.3412, 0.3591, 0.3168]; after the second repetition it is L1 = [0.3312, 0.3365, 0.3430]. The estimate L1 = [0.3312, 0.3365, 0.3430] obtained after the two passes is taken as the final light source estimate.

As shown in Figure 3, the first bar shows the angular error between the light source roughly estimated in step S3 and the real light source; the second bar, the angular error after one repetition in step S5; and the third bar, the angular error after two repetitions in step S5. The line connecting the three bars reflects the downward trend of the estimation error, indicating that the estimated light source becomes increasingly accurate.

Figure 4 shows the directions of the red and green component responses in the three-primary color space finally computed in step S5 alongside the directions of the red and green component responses of the real light source; it indicates that the response values computed in step S5 closely match the color of the real scene light source.

A second specific embodiment below gives a simple demonstration of a practical application of the light source estimate finally obtained by the invention, taking image tone correction as the example.

Embodiment 2:

Using the light source color values for each color component computed in step S5, correct the pixel values of each color component of the original input image. Taking one pixel (0.335, 0.538, 0.601) of the test image input in step S3 as an example, the corrected result is (0.335/0.3312, 0.538/0.3365, 0.601/0.3430) = (1.0115, 1.5988, 1.7522), which becomes (0.2319, 0.3665, 0.4016) after normalization; the corrected values are then multiplied by the standard white light coefficient 1/√3 to obtain (0.1339, 0.2116, 0.2319) as the pixel values of the final output corrected image. Analogous calculations are performed for the other pixels of the original input image, finally yielding the corrected color image shown in Figure 5.
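The arithmetic of this worked example can be replayed directly. One assumption: the standard white light coefficient is taken here as 1/√3, the value consistent with the quoted output triplet.

```python
import numpy as np

# Worked example from Embodiment 2; the white-light coefficient 1/sqrt(3)
# is an inference consistent with the quoted output values.
pixel = np.array([0.335, 0.538, 0.601])   # input pixel of the test image
L1 = np.array([0.3312, 0.3365, 0.3430])   # final illuminant estimate

corrected = pixel / L1                    # ~ (1.0115, 1.5988, 1.7522)
normalized = corrected / corrected.sum()  # ~ (0.2319, 0.3665, 0.4016)
output = normalized / np.sqrt(3)          # ~ (0.1339, 0.2116, 0.2319)
```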

Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help readers understand the principles of the invention, and it should be understood that the scope of protection of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can, based on the technical teachings disclosed by the invention, make various other specific modifications and combinations that do not depart from the essence of the invention, and such modifications and combinations remain within the scope of protection of the invention.

Claims (6)

1. A color image light source color estimation method based on classification correction, characterized by comprising the following steps:
S1. Extract the edge features of the training images: take N color images with known light sources as the original training set T, convolve each with the template G obtained by differentiating a Gaussian distribution to obtain the edge value at every pixel, and extract edge features, yielding the edge feature matrix M of the N training images;
S2. Learn the correction matrix: using the least squares method, learn the correction matrix C between the feature matrix M computed in step S1 and the standard light sources L of the N training images;
S3. Rough light source estimation: extract the edge features of the test image with the method of step S1 and multiply them by the correction matrix C learned in step S2 to obtain a rough light source estimate L1;
S4. Find the training images corresponding to the test image: remove the light source from the test image and from the original training set T, then extract edge features from each with the method of step S1 to form a feature space; in this feature space, find the K training images whose features are closest to those of the test image and take them as the new training set TN;
S5. Accurate light source estimation: repeat steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4 (the number of training images correspondingly changing from N to K), until the TN obtained in step S4 is identical to the TN obtained in step S4 of the previous iteration; take the light source estimate L1 produced by step S3 in the last iteration as the final light source estimate.
2. The color image light source color estimation method based on category correction according to claim 1, wherein the template G obtained by differentiating the Gaussian distribution in step S1 is the Gaussian gradient operator.
3. The color image light source color estimation method based on category correction according to claim 1, wherein the edge features in step S1 are computed as:
M_xyz = (1/N1) · Σ_{i=1..N1} R_i^x · G_i^y · B_i^z
where R_i, G_i and B_i denote the edge values of pixel i in the R, G and B channels respectively, N1 is the number of pixels in the image, M_xyz is the edge feature value for the combination (x, y, z), and (x, y, z) ranges over all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and x + y + z = 3.
4. The color image light source color estimation method based on category correction according to claim 1, wherein the value range of K in step S4 is
5. The color image light source color estimation method based on category correction according to claim 1, wherein step S4 specifically comprises the following sub-steps:
S41, removing the standard illuminant L from the original N training images, and extracting edge features with the method of step S1;
S42, removing the roughly estimated illuminant L1 of step S3 from the test image, extracting its edge features with the method of step S1, and forming a feature space together with the edge features of the N training images extracted in step S41;
S43, finding in the feature space the K images with the smallest feature distance to the test image, as the new training image set TN for the test image.
6. The color image light source color estimation method based on category correction according to claim 5, wherein the feature distance in step S43 is the Euclidean distance.
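The claimed pipeline (steps S1-S5) can be sketched in Python with numpy. This is a minimal illustration, not the patented implementation: the Gaussian scale sigma, the averaging used to pool the third-order moments M_xyz, and the per-channel division used to "remove" an illuminant are all assumptions made for the sketch.

```python
import numpy as np

def dgauss_kernel(sigma=1.0):
    """1-D first-derivative-of-Gaussian template G (step S1)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    return -x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))

def edge_features(img, sigma=1.0):
    """S1: per-channel edge map via convolution with G, pooled into the
    third-order color moments M_xyz over all (x, y, z) with x + y + z = 3."""
    k = dgauss_kernel(sigma)
    conv = lambda a, ax: np.apply_along_axis(np.convolve, ax, a, k, 'same')
    edges = np.abs(conv(img, 0)) + np.abs(conv(img, 1))  # edge magnitude per channel
    R, G, B = (edges[..., c].ravel() for c in range(3))
    feats = [np.mean(R**x * G**y * B**(3 - x - y))
             for x in range(4) for y in range(4 - x)]    # 10 combinations
    return np.array(feats)

def learn_correction(F, L):
    """S2: least-squares correction matrix C with F @ C ~= L."""
    return np.linalg.lstsq(F, L, rcond=None)[0]

def estimate(test_img, train_imgs, train_L, K=3, max_iter=10):
    """S3-S5: rough estimate, then iterate on the K most similar training images."""
    idx = np.arange(len(train_imgs))
    train_L = np.asarray(train_L)
    for _ in range(max_iter):
        F = np.stack([edge_features(train_imgs[i]) for i in idx])
        C = learn_correction(F, train_L[idx])
        L1 = edge_features(test_img) @ C                  # S3: rough estimate
        # S4: divide out the (known / estimated) illuminant, re-extract features
        f_test = edge_features(test_img / np.maximum(L1, 1e-6))
        f_train = np.stack([edge_features(train_imgs[i] / np.maximum(train_L[i], 1e-6))
                            for i in idx])
        d = np.linalg.norm(f_train - f_test, axis=1)      # Euclidean distance (claim 6)
        new_idx = idx[np.argsort(d)[:K]]
        if set(new_idx) == set(idx):                      # S5: TN unchanged -> stop
            break
        idx = new_idx
    n = np.linalg.norm(L1)
    return L1 / n if n > 0 else L1                        # unit-norm illuminant estimate
```

The convergence test in `estimate` mirrors step S5: iteration stops as soon as the selected training subset TN no longer changes, and the rough estimate L1 from the final pass is returned as the accurate estimate.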
CN201610606092.7A 2016-07-28 2016-07-28 A kind of color image light source colour estimation method based on category correction Active CN106295679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610606092.7A CN106295679B (en) 2016-07-28 2016-07-28 A kind of color image light source colour estimation method based on category correction


Publications (2)

Publication Number Publication Date
CN106295679A CN106295679A (en) 2017-01-04
CN106295679B true CN106295679B (en) 2019-06-25

Family

ID=57663052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610606092.7A Active CN106295679B (en) 2016-07-28 2016-07-28 A kind of color image light source colour estimation method based on category correction

Country Status (1)

Country Link
CN (1) CN106295679B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060308B (en) * 2019-03-28 2021-02-02 杭州电子科技大学 Color constancy method based on light source color distribution limitation
CN112995634B (en) * 2021-04-21 2021-07-20 贝壳找房(北京)科技有限公司 Image white balance processing method and device, electronic equipment and storage medium
CN116188797B (en) * 2022-12-09 2024-03-26 齐鲁工业大学 Scene light source color estimation method capable of being effectively embedded into image signal processor

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258334A (en) * 2013-05-08 2013-08-21 电子科技大学 Method of estimating scene light source colors of color image
US20130243085A1 (en) * 2012-03-15 2013-09-19 Samsung Electronics Co., Ltd. Method of multi-view video coding and decoding based on local illumination and contrast compensation of reference frames without extra bitrate overhead


Also Published As

Publication number Publication date
CN106295679A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
Abdelhamed et al. A high-quality denoising dataset for smartphone cameras
CN107103613B Three-dimensional gesture pose estimation method
CN103916603B Backlight detection method and device
CN104504722B Method for correcting image colors through gray points
US11967040B2 Information processing apparatus, control method thereof, imaging device, and storage medium
CN103974053B Automatic white balance correction method based on gray point extraction
WO2015074521A1 Devices and methods for positioning based on image detection
CN109791695A Determining the variance of an image block based on a motion vector for the block
CN106295679B Color image light source color estimation method based on category correction
CN114581318A Method and system for low-illumination image enhancement
CN109255390A Preprocessing method and module for training images, discriminator, and readable storage medium
CN105493105B Key point identification
WO2008056140A2 Detecting illumination in images
CN110111341B Image foreground acquisition method, apparatus and device
CN112381751A Online intelligent detection system and method based on an image processing algorithm
CN106296658B Method for improving scene light source estimation accuracy based on the camera response function
CN106204500A Method for keeping image colors of the same scene constant across different cameras
CN105844260A Multifunctional smart cleaning robot apparatus
CN113034449A Target detection model training method and device, and communication equipment
JP2022150562A Image processing device, image processing method and program
Zhang et al. A combined approach to single-camera-based lane detection in driverless navigation
JP6897100B2 Judgment device, judgment method, and judgment program
CN110136105A Sharpness evaluation method for same-content images based on variance and smoothness
CN112288748A Semantic segmentation network training and image semantic segmentation method and device
CN113052909B Image photometric calibration method, device and computer-readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant