CN107330870A - A dense fog removal method based on accurate estimation of scene light radiation - Google Patents
A dense fog removal method based on accurate estimation of scene light radiation
- Publication number
- CN107330870A CN107330870A CN201710509774.0A CN201710509774A CN107330870A CN 107330870 A CN107330870 A CN 107330870A CN 201710509774 A CN201710509774 A CN 201710509774A CN 107330870 A CN107330870 A CN 107330870A
- Authority
- CN
- China
- Prior art keywords
- light radiation
- scene light
- image
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a dense fog removal method based on accurate estimation of scene light radiation. The method mainly comprises: applying maximum-value filtering to each of the three channels of a foggy image to obtain an initial estimate of the scene light radiation (S1); jointly filtering each channel of the foggy image with the corresponding initial scene-light-radiation estimate to obtain an accurate estimate of the scene light radiation (S2); dividing each channel of the foggy image by the corresponding scene-light-radiation component to obtain a foggy image free of the attenuation of the scene light radiation (S3); projecting that image into a spherical coordinate system, clustering the pixels by angle, and computing the transmittance of each pixel with the haze-line method (S4); and recovering the dehazed image from the obtained transmittance and the foggy-day imaging model (S5). Dense fog images processed by the method of the invention have brightness suited to human observation and clearly visible details.
Description
Technical Field
The present invention relates to an image enhancement method, and in particular to a dense fog removal method based on accurate estimation of scene light radiation, belonging to the technical field of digital image processing.
Background Art
Fog in an image greatly reduces visibility: accurate information can no longer be extracted, wrong judgments about the surroundings follow, and in severe cases disasters may result. Under dense fog in particular, visibility drops drastically and large amounts of image information are lost, which directly prevents security monitoring systems from doing their job. Given the image-quality requirements of outdoor surveillance, in-depth research on defogging dense fog images has become an urgent problem in image clarification.
Image defogging algorithms fall into two broad categories: methods not based on a physical model, and methods based on one. The two differ in whether they use the foggy-day imaging model.
Defogging methods not based on a physical model do not start from the physical causes of image degradation; instead they enhance contrast and correct color, improving image quality according to visual perception. Typical foggy-image enhancement methods include histogram equalization, wavelet and curvelet transforms, and automatic color equalization. Such algorithms cannot achieve defogging in the true sense; they only improve visual appearance to some extent, and tend to leave residual fog and introduce color distortion.
Defogging methods based on a physical model essentially start from the classical atmospheric scattering model and recover the scene reflectance or a fog-free image by solving for the model parameters. Current physically based restoration methods mainly include those based on polarization characteristics, on partial differential equations, on depth information, and on prior knowledge or assumptions. These methods assume the scene light radiation is sufficient and consider only the degradation caused by scattering from particles suspended near the ground, so they handle thin haze images well.
Summary of the Invention
However, when a thin-haze method is applied to a dense fog image, the result tends to be globally dark or color-cast, with lost detail. The reason is that under dense fog, aerosol particles accumulate near the surface; as the optical thickness of the fog grows, the transmittance of visible light falls, and the radiant energy reaching the ground decreases accordingly. Moreover, the meteorology under dense fog is more complex and may be accompanied by a thickened atmosphere. In the low-altitude region near the surface, suspended aerosol particles are larger than the wavelength of light, so their attenuation coefficient is the same for all visible wavelengths. As altitude increases, however, gravity weakens and the diameter of the suspended particles shrinks. The atmosphere contains many particles smaller than the wavelength of visible light, and these produce Rayleigh scattering. Rayleigh scattering disperses shorter-wavelength light during propagation, so only longer-wavelength light penetrates the atmosphere to the ground. Consequently, the surface light radiation under dense fog may take on a color cast, and the fog no longer looks pure white.
To remove dense fog more effectively, researchers have proposed first adjusting image brightness and then eliminating the fog with the dark channel prior. This improves the brightness of the result, but because the scene light radiation is estimated inaccurately, image detail remains unclear. Furthermore, brightness adjustment may remove some shadow regions, so the image no longer fully obeys the dark channel prior. If that prior is still used to estimate the scattering-related parameters, the image loses depth and noise is easily amplified.
Against this background, it is particularly important to develop a dense fog enhancement method that preserves the brightness and detail of the enhanced image while effectively removing the fog.
The technical problem to be solved by the present invention is to provide a dense fog removal method based on accurate estimation of scene light radiation, realizing dense fog removal from a single image.
To achieve the above object, the present invention provides a dense fog removal method based on accurate estimation of scene light radiation, comprising the following steps:
(1) decomposing a foggy image I into its three RGB color channels to obtain channel component maps I_r, I_g, I_b, and applying maximum-value filtering to each of I_r, I_g, I_b to obtain rough estimates M_r, M_g, M_b of the scene light radiation;
(2) applying joint edge-preserving filtering to the three image pairs (I_r, M_r), (I_g, M_g), (I_b, M_b) to obtain accurate estimates L_r, L_g, L_b of the scene light radiation;
(3) dividing the channel component maps I_r, I_g, I_b by the corresponding accurate estimates L_r, L_g, L_b and recombining the three results into a foggy image J free of the attenuation of the scene light radiation;
(4) projecting each pixel value of J into a spherical coordinate system to obtain coordinates (r(x,y), θ(x,y), φ(x,y)), where (x,y) are the pixel coordinates, r(x,y) is the distance from the point to the origin, i.e. ||J(x,y) − 1||, and θ(x,y) and φ(x,y) are the corresponding longitude and latitude; clustering the pixels by these angles into n haze-lines (P_1, P_2, …, P_n); and computing the transmittance of each pixel with the haze-line method;
(5) using the transmittance obtained in step (4) to compute the scene reflectance and thereby the dehazed image.
The dense fog removal method based on accurate estimation of scene light radiation as described above, characterized in that in step (1) the radius of the maximum-value filter is Radius, computed as:

Radius = floor(max(height, width)/100) + 1   (1)

where height and width are the height and width of the image.
The dense fog removal method based on accurate estimation of scene light radiation as described above, characterized in that in step (2) the joint filtering uses the following formula:

L_c(x,y) = Σ_{(i,j)∈Ω} ω_c(i,j) η_c(i,j) M_c(i,j) / Σ_{(i,j)∈Ω} ω_c(i,j) η_c(i,j)   (2)

where (x,y) and (i,j) are pixel coordinates, Ω is the block centered at (x,y), and M_c is the rough scene-light-radiation estimate obtained by maximum-value filtering of each channel.

η_c(i,j) is a step function:

η_c(i,j) = 1 if M_c(i,j) ≥ I_c(x,y), and 0 otherwise   (3)

ω_c(i,j) is an exponential function:

ω_c(i,j) = exp(−(M_c(i,j) − I_c(x,y))² / (2σ²))   (4)

where I_c(x,y) is channel c of the foggy image and σ is the pixel-value variance.
The dense fog removal method based on accurate estimation of scene light radiation as described above, characterized in that in step (3), to avoid dividing by an ambient-illumination value of zero, a very small constant ε = 0.01 is added to each illumination component obtained in step (2) before the division.
The dense fog removal method based on accurate estimation of scene light radiation as described above, characterized in that in step (4) the k-means algorithm is used to cluster the pixels projected into the spherical coordinate system.
The dense fog removal method based on accurate estimation of scene light radiation as described above, characterized in that the number of cluster centers used by the clustering method is set in the range of 200 to 500.
Brief Description of the Drawings
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of the dense fog removal method based on accurate estimation of scene light radiation according to the present invention.
Figs. 2(a) to 2(c) show the 3-D effect of the joint filtering method of the present invention.
Figs. 3(a) to 3(f) compare experimental results of the defogging method according to the present invention with typical existing defogging methods on a test image: Fig. 3(a) is the foggy image; Fig. 3(b) is the result of contrast-limited adaptive histogram equalization; Fig. 3(c) is the result of Tarel's method; Fig. 3(d) is the result of the dark channel prior combined with guided filtering; Fig. 3(e) is the result of the dark channel prior combined with post-processing; and Fig. 3(f) is the result of the method of the present invention.
Detailed Description
The dense fog removal method based on accurate estimation of scene light radiation provided by the present invention first estimates the scene light radiation with an edge-preserving filter and removes the influence of its attenuation. It then computes the scene reflectance using properties of the pixels in a spherical coordinate system and obtains the dehazed image. This is described in detail below.
In the present invention, the dense fog removal method based on accurate estimation of scene light radiation comprises the following steps, as shown in Fig. 1:
Step 1. Decompose the foggy image I into its r, g, b color channels to obtain channel component maps I_r, I_g, I_b, and apply maximum-value filtering to each to obtain rough estimates M_r, M_g, M_b of the scene light radiation.
A naturalness-preserving estimate of the scene light radiation must satisfy two conditions: (a) the light radiation should be smooth over most regions while retaining light–dark edges; (b) the scene light radiation should be no smaller than the reflected light, so that the dehazed image retains as much scene detail as possible.
In Retinex theory, many center-surround methods obtain the scene light radiation by low-pass filtering the maximum channel of the image. This is unsuitable when the scene light radiation is color-cast; moreover, the maximum channel is only a lower bound on the scene light radiation, and using it as the initial illumination estimate lacks physical justification.
In earlier literature, based on the assumption that the brightest region of an image is a white surface or a highlight of the light source, researchers proposed the Max-RGB algorithm, which takes the maximum of the three channels as the illumination estimate. This method fails, however, under non-uniform illumination. To make the estimate robust, the present invention extends Max-RGB to local regions; in other words, it assumes that within each local region the light reflected by the object of highest reflectance is closest to the ambient illumination. Let I be the observed image; the rough estimate of the scene light radiation is then:

M_c(x,y) = max_{(i,j)∈Ω} I_c(i,j)   (1)

where Ω is a local window centered at (x,y) and c is the color channel.
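As an illustrative sketch (not part of the patent text), the windowed Max-RGB rough estimate can be implemented as a per-channel sliding-window maximum; the window radius below follows the Radius formula from the claims, and the function names are assumptions of this sketch.

```python
import numpy as np

def max_filter(channel, radius):
    """Sliding-window maximum over a (2*radius+1)^2 block, edge-padded."""
    h, w = channel.shape
    padded = np.pad(channel, radius, mode="edge")
    out = np.empty_like(channel)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1].max()
    return out

def rough_scene_light(image):
    """Rough per-channel scene-light estimate M_r, M_g, M_b for an HxWx3 image
    with values in [0, 1]; radius follows Radius = floor(max(h, w)/100) + 1."""
    h, w = image.shape[:2]
    radius = max(h, w) // 100 + 1
    return np.stack([max_filter(image[..., c], radius) for c in range(3)], axis=-1)
```

Because each window maximum includes the center pixel, M_c ≥ I_c holds everywhere, which is exactly condition (b) required of the subsequent refinement.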
Step 2. Apply joint edge-preserving filtering to the three image pairs (I_r, M_r), (I_g, M_g), (I_b, M_b) to obtain accurate estimates L_r, L_g, L_b of the scene light radiation.
The method of step 1 does approximate the scene light radiation fairly well. However, like other methods that assume local constancy, its illumination estimate exhibits block artifacts at the light–dark boundaries of the scene light radiation and therefore needs further refinement. To this end, the present invention designs a content-adaptive joint edge-preserving filter to estimate the accurate scene light radiation L_c(x,y):

L_c(x,y) = Σ_{(i,j)∈Ω} ω_c(i,j) η_c(i,j) M_c(i,j) / Σ_{(i,j)∈Ω} ω_c(i,j) η_c(i,j)   (2)

ω_c(i,j) is an exponential function that constrains the light radiation to satisfy condition (a):

ω_c(i,j) = exp(−(M_c(i,j) − I_c(x,y))² / (2σ²))   (3)

η_c(i,j) is a step function that constrains the scene light radiation to satisfy condition (b), defined as:

η_c(i,j) = 1 if M_c(i,j) ≥ I_c(x,y), and 0 otherwise   (4)

where σ is the pixel gray-level difference.
Unlike other joint edge-preserving filters, this method does not impose the neighborhood relations of the guide image onto the image being filtered for smoothing: when each window is filtered, only one pixel of the guide image, the window center, acts as the guide.
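The filter equations are garbled in the source, so the sketch below is one interpretation of the textual description: within each window, the single guide value I_c(x,y) gates the neighbors through a step function (condition (b)) and weights them with a Gaussian range kernel (condition (a)). The function name, the fallback branch, and the default sigma are assumptions of this sketch, not the patent's exact implementation.

```python
import numpy as np

def joint_edge_preserving(I_c, M_c, radius=1, sigma=0.1, eps=1e-6):
    """Refine the rough estimate M_c, guided at each window centre by the
    single pixel I_c(x, y): eta keeps only neighbours with M_c >= I_c(x, y),
    omega weights them by a Gaussian on |M_c - I_c(x, y)|."""
    h, w = I_c.shape
    Mp = np.pad(M_c, radius, mode="edge")
    L = np.empty_like(I_c)
    for y in range(h):
        for x in range(w):
            win = Mp[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            guide = I_c[y, x]                      # the single guiding pixel
            eta = (win >= guide).astype(float)     # step function, condition (b)
            omega = np.exp(-((win - guide) ** 2) / (2 * sigma ** 2))
            wgt = eta * omega
            s = wgt.sum()
            # fall back to the rough estimate if no neighbour qualifies
            L[y, x] = (wgt * win).sum() / s if s > eps else M_c[y, x]
    return L
```

Applying this per channel to the pairs (I_r, M_r), (I_g, M_g), (I_b, M_b) yields L_r, L_g, L_b; by construction the output never drops below the guide value where any neighbor qualifies.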
Step 3. Divide the channel component maps I_r, I_g, I_b by the corresponding accurate estimates L_r, L_g, L_b and recombine the three results into a foggy image J free of the attenuation of the scene light radiation.
With the accurate estimate of the scene light radiation, the image free of its attenuation is obtained by the following formula:
J_c(x,y) = I_c(x,y) / L_c(x,y)   (5)
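A minimal sketch of this step, including the ε = 0.01 safeguard from the claims; the clipping to [0, 1] is an implementation choice of this sketch, not stated in the source.

```python
import numpy as np

def remove_illumination(I, L, eps=0.01):
    """J_c = I_c / (L_c + eps), per channel; eps avoids division by zero."""
    return np.clip(I / (L + eps), 0.0, 1.0)
```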
Step 4. Project each pixel value of J into a spherical coordinate system to obtain coordinates (r(x,y), θ(x,y), φ(x,y)), where (x,y) are the pixel coordinates, r(x,y) is the distance from the point to the origin, i.e. ||J(x,y) − 1||, and θ(x,y) and φ(x,y) are the corresponding longitude and latitude; cluster the pixels by these angles into n haze-lines (P_1, P_2, …, P_n); then compute the transmittance of each pixel with the haze-line method.
Once the attenuation of the scene light radiation has been removed, obtaining the scene reflectance reduces to obtaining the transmittance from the following formula:
J(x,y) = R(x,y)t(x,y) + 1 − t(x,y)   (6)
Rearranging the above formula gives:
J(x,y) − 1 = (R(x,y) − 1)t(x,y)   (7)
Expressing J(x,y) − 1 in the spherical coordinate system gives:
J(x,y) − 1 = [r(x,y), θ(x,y), φ(x,y)]   (8)
where r is the distance from the point to the origin, i.e. ||J(x,y) − 1||, and θ and φ are the longitude and latitude. Pixels of the same color in a foggy image have different t; in this spherical coordinate system they lie at different radii from the origin yet share the same angles θ and φ. On this principle, once the number of cluster centers is set, clustering yields a number of lines, each containing pixel values of similar colors.
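A self-contained sketch of the projection and grouping: the patent clusters with k-means using 200 to 500 centers (claims 5 and 6), while this sketch substitutes uniform angular binning so that no clustering library is needed; the bin counts and the function name are illustrative assumptions.

```python
import numpy as np

def haze_line_transmittance(J, n_theta=60, n_phi=60):
    """Project J - 1 to spherical coordinates, group pixels by (theta, phi),
    and set t = r / r_max within each group (the farthest pixel gets t = 1)."""
    V = J.reshape(-1, 3) - 1.0
    r = np.linalg.norm(V, axis=1)
    theta = np.arctan2(V[:, 1], V[:, 0])
    phi = np.arccos(np.clip(V[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
    # quantise the angles into bins, a stand-in for the k-means haze-lines
    ti = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int).clip(0, n_theta - 1)
    pj = (phi / np.pi * n_phi).astype(int).clip(0, n_phi - 1)
    labels = ti * n_phi + pj
    t = np.ones_like(r)
    for lbl in np.unique(labels):
        idx = labels == lbl
        r_max = r[idx].max()
        if r_max > 1e-9:
            t[idx] = r[idx] / r_max
    return t.reshape(J.shape[:2])
```

With a synthetic scene built from Eq. (6), pixels of one color at different depths fall on one haze-line and their transmittances are recovered exactly up to the normalization by the farthest pixel.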
Then the transmittance of the farthest pixel in each cluster is set to 1, and the transmittance of the other pixels on each line is computed from the ratio of their projected spherical radius to the radius of that farthest pixel.
For a given pixel, r(x,y) can be expressed as:

r(x,y) = t(x,y)||J(x,y) − 1||,  0 ≤ t(x,y) ≤ 1   (9)

The longest radius, r_max = max_{(x,y)∈P} r(x,y), where P is one cluster line, is assigned t = 1. The transmittance of every point is then obtained as:

t(x,y) = r(x,y) / r_max   (10)
Step 5. Use the transmittance obtained in step (4) to compute the scene reflectance and thereby the dehazed image.
The scene reflectance, taken as the dehazed image, is computed by the following formula:

R(x,y) = (J(x,y) − 1) / t(x,y) + 1   (11)
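Inverting the imaging model J = R·t + 1 − t gives the recovery step sketched below; the transmittance floor t_min and the clipping are common safeguards assumed here, not stated in the source.

```python
import numpy as np

def recover(J, t, t_min=0.1):
    """R = (J - 1) / t + 1 per pixel, with t broadcast over the colour axis."""
    t = np.maximum(t, t_min)[..., None]
    return np.clip((J - 1.0) / t + 1.0, 0.0, 1.0)
```

Applying this with the per-pixel transmittance from step 4 yields the dehazed image.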
It should be noted that what is disclosed above are merely specific embodiments of the present invention. Variations conceivable to those of ordinary skill in the art under the technical idea provided by the present invention shall fall within the protection scope of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710509774.0A CN107330870B (en) | 2017-06-28 | 2017-06-28 | A dense fog removal method based on accurate estimation of scene light radiation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710509774.0A CN107330870B (en) | 2017-06-28 | 2017-06-28 | A dense fog removal method based on accurate estimation of scene light radiation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107330870A true CN107330870A (en) | 2017-11-07 |
CN107330870B CN107330870B (en) | 2019-06-18 |
Family
ID=60198601
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710509774.0A Active CN107330870B (en) | 2017-06-28 | 2017-06-28 | A dense fog removal method based on accurate estimation of scene light radiation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107330870B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108447034A (en) * | 2018-03-13 | 2018-08-24 | 北京航空航天大学 | A sea fog image dehazing method based on illumination decomposition
CN109345479A (en) * | 2018-09-28 | 2019-02-15 | 中国电子科技集团公司信息科学研究院 | A real-time preprocessing method and storage medium for video surveillance data
CN110335210A (en) * | 2019-06-11 | 2019-10-15 | 长江勘测规划设计研究有限责任公司 | An underwater image restoration method |
CN111583125A (en) * | 2019-02-18 | 2020-08-25 | 佳能株式会社 | Image processing apparatus, image processing method, and computer-readable storage medium |
CN112907472A (en) * | 2021-02-09 | 2021-06-04 | 大连海事大学 | Polarization underwater image optimization method based on scene depth information |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102243758A (en) * | 2011-07-14 | 2011-11-16 | 浙江大学 | Fog-degraded image restoration and fusion based image defogging method |
CN104809707A (en) * | 2015-04-28 | 2015-07-29 | 西南科技大学 | Method for estimating visibility of single fog-degraded image |
CN105469372A (en) * | 2015-12-30 | 2016-04-06 | 广西师范大学 | Mean filtering-based fog-degraded image sharp processing method |
CN106846263A (en) * | 2016-12-28 | 2017-06-13 | 中国科学院长春光学精密机械与物理研究所 | The image defogging method being immunized based on fusion passage and to sky |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102243758A (en) * | 2011-07-14 | 2011-11-16 | 浙江大学 | Fog-degraded image restoration and fusion based image defogging method |
CN104809707A (en) * | 2015-04-28 | 2015-07-29 | 西南科技大学 | Method for estimating visibility of single fog-degraded image |
CN105469372A (en) * | 2015-12-30 | 2016-04-06 | 广西师范大学 | Mean filtering-based fog-degraded image sharp processing method |
CN106846263A (en) * | 2016-12-28 | 2017-06-13 | 中国科学院长春光学精密机械与物理研究所 | The image defogging method being immunized based on fusion passage and to sky |
Non-Patent Citations (5)
Title |
---|
DANA BERMAN et al.: 2017 IEEE International Conference on Computational Photography (ICCP), 14 May 2017 *
QINGSONG ZHU et al.: "A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior", IEEE Transactions on Image Processing *
YUANYUAN GAO et al.: "A fast image dehazing algorithm based on negative correction", Signal Processing *
ZHANG Jingjing et al.: "Dense fog removal algorithm for polarized images based on the dark channel prior", Journal of Computer Applications *
LU Jianqiang et al.: "Real-time defogging and clarification system for farmland video based on an improved dark channel prior algorithm", Transactions of the Chinese Society of Agricultural Engineering *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108447034A (en) * | 2018-03-13 | 2018-08-24 | 北京航空航天大学 | A sea fog image dehazing method based on illumination decomposition |
CN108447034B (en) * | 2018-03-13 | 2021-08-13 | 北京航空航天大学 | A dehazing method for sea fog images based on illumination decomposition |
CN109345479A (en) * | 2018-09-28 | 2019-02-15 | 中国电子科技集团公司信息科学研究院 | A real-time preprocessing method and storage medium for video surveillance data |
CN109345479B (en) * | 2018-09-28 | 2021-04-06 | 中国电子科技集团公司信息科学研究院 | Real-time preprocessing method and storage medium for video monitoring data |
CN111583125A (en) * | 2019-02-18 | 2020-08-25 | 佳能株式会社 | Image processing apparatus, image processing method, and computer-readable storage medium |
CN111583125B (en) * | 2019-02-18 | 2023-10-13 | 佳能株式会社 | Image processing apparatus, image processing method, and computer-readable storage medium |
US11995799B2 (en) | 2019-02-18 | 2024-05-28 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium |
CN110335210A (en) * | 2019-06-11 | 2019-10-15 | 长江勘测规划设计研究有限责任公司 | An underwater image restoration method |
CN110335210B (en) * | 2019-06-11 | 2022-05-13 | 长江勘测规划设计研究有限责任公司 | Underwater image restoration method |
CN112907472A (en) * | 2021-02-09 | 2021-06-04 | 大连海事大学 | Polarization underwater image optimization method based on scene depth information |
Also Published As
Publication number | Publication date |
---|---|
CN107330870B (en) | 2019-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106530246B (en) | Image defogging method and system based on the dark channel and non-local priors | |
CN103020920B (en) | Method for enhancing low-illumination images | |
CN111292258B (en) | Image defogging method based on dark channel prior and bright channel prior | |
Tripathi et al. | Single image fog removal using bilateral filter | |
CN104537615B (en) | A local Retinex enhancement method based on the HSV color space | |
CN102930514B (en) | Rapid image defogging method based on atmospheric physical scattering model | |
CN107330870B (en) | A dense fog removal method based on accurate estimation of scene light radiation | |
CN106910175A (en) | A single-image defogging algorithm based on deep learning | |
CN103955905A (en) | Single-image defogging method based on fast wavelet transform and weighted image fusion | |
TWI489416B (en) | Image recovery method | |
CN108537756A (en) | Single-image defogging method based on image fusion | |
CN108389175A (en) | Image defogging method fusing the variogram and the color attenuation prior | |
CN105931208A (en) | Physical model-based low-illuminance image enhancement algorithm | |
CN106157270A (en) | A fast single-image defogging method and system | |
CN109087254A (en) | Adaptive processing method for sky and white regions in unmanned aerial vehicle haze images | |
CN114219732A (en) | Image defogging method and system based on sky region segmentation and transmissivity refinement | |
CN108765323A (en) | A flexible defogging method based on an improved dark channel and image fusion | |
CN106447617A (en) | Improved Retinex image defogging method | |
CN111563852A (en) | Dark channel prior defogging method based on low-complexity MF | |
CN104318528A (en) | Foggy weather image restoration method based on multi-scale WLS filtering | |
CN107563980A (en) | Underwater image clarification method based on an underwater imaging model and the depth of field | |
CN113034379A (en) | Weather- and time-adaptive rapid image sharpening processing method | |
CN106709876B (en) | Optical remote sensing image defogging method based on the dark pixel principle | |
CN115034997A (en) | Image processing method and device | |
CN109191405B (en) | Aerial image defogging algorithm based on transmittance global estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||