CN116957984A - Method and system for monitoring unmanned aerial vehicle based on low-illuminance haze - Google Patents
Method and system for monitoring unmanned aerial vehicle based on low-illuminance haze
- Publication number: CN116957984A (application number CN202311001014.0A)
- Authority: CN (China)
- Prior art keywords
- light source
- input image
- transmittance
- pixel
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T5/00—Image enhancement or restoration › G06T5/20—Image enhancement or restoration using local operators
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/10—Segmentation; Edge detection › G06T7/136—Segmentation; Edge detection involving thresholding
Abstract
Description
Technical Field
The present invention relates to the field of image processing technology, and in particular to a method and system for monitoring unmanned aerial vehicles (UAVs) under low-illumination haze.
Background
With the development of science and technology, drones have attracted increasing attention in recent years. Beyond military applications, their civilian application scenarios keep expanding: drones perform well in entertainment aerial photography, agricultural plant protection, police security, power-line inspection, surveying and mapping, logistics and transportation, choreographed performances, and other fields. However, drone technology also brings drawbacks. For the aviation sector, drones are difficult to detect, supervise, and police. During aircraft take-off and landing, failing to recognize a drone's movements in time, or to observe dangerous items it may be carrying, can easily lead to a crash, threatening lives and causing economic loss. The prerequisite for controlling drones is the ability to monitor them effectively.
Consider the working environment of a drone. In daytime scenes, image-acquisition equipment can easily capture the drone, monitor its movements, and identify dangerous items it carries. However, when the drone operates under low illumination accompanied by haze, monitoring becomes much harder. Current low-illumination dehazing methods still have many shortcomings, including poor color saturation, blurred texture details, and heavy noise. To address uneven illumination at night, some researchers have proposed dehazing algorithms that first compensate the illumination and then correct the color; although the colors appear improved, inaccurate illumination estimation during compensation leaves glowing regions poorly handled, so the restored image shows obvious halos and heavy noise. Other researchers have proposed achieving dehazing through illumination compensation followed by color correction, but failed to estimate the transmission reasonably, resulting in color distortion in the final restored image and a poor dehazing effect. In addition, some researchers, noting that artificial light sources at night exhibit glare and uneven illumination, added a glow layer to the standard daytime dehazing model, removed it to obtain a layer-separation result, re-estimated the nighttime atmospheric light block by block, and estimated the transmittance with dark channel theory to obtain the restored image. Although this approach dehazes well, the lack of illumination compensation and brightness enhancement makes the restored image dark overall, with unclear texture details; after dehazing, the image appears too dark and details are lost.
Some published patent applications also propose schemes for monitoring drones, for example patent applications CN108734670A and CN115170404A. Although these schemes improve low-illumination dehazing, their scope is too broad to process drone images under low-illumination haze in a targeted way. For example, they do not consider the drone's actual working state: when operating under low illumination, a drone carries its own lights, which affect the dehazed image and easily produce a halo effect. In addition, when the image contains a large sky region, the sky suffers from serious color cast and artifacts after dehazing, degrading the result and making it difficult to observe the drone. Patent application CN115616479A discloses monitoring drones with special devices and systems, but it cannot directly reveal the dangerous items carried by the drone itself, and such items may cause major accidents.
Therefore, given the technical problems of monitoring drones in such special scenes, a low-illumination dehazing method applicable to these scenes is needed, so as to monitor drones better and identify the dangerous items they carry.
Summary of the Invention
The purpose of the present invention is to provide a method and system for monitoring UAVs under low-illumination haze. On the basis of the dark channel prior theory, the dehazing method is improved to raise the quality of the output image, solve the problem of monitoring UAVs under low-illumination haze, and identify dangerous items carried by UAVs under complex conditions.
To achieve the above objects, the present invention proposes the following technical solutions:
In a first aspect, a method for monitoring UAVs under low-illumination haze is disclosed, comprising:
acquiring the pixels of an input image, preprocessing the input image with a hybrid of side window filtering and fast edge-preserving filtering, and obtaining an ambient light estimate of the input image;
setting a light source threshold of the input image according to the ambient light estimate;
dividing the input image into a light source region and a non-light-source region according to the light source threshold, optimizing the transmittance of the light source region and of the non-light-source region separately with an adaptive light source matrix mechanism, and fusing them to solve for an initial transmittance;
performing light source compensation on the initial transmittance to obtain and output a final transmittance of the input image;
according to the ambient light estimate and the final transmittance, performing a dehazing calculation on the input image based on the atmospheric scattering model to obtain a dehazed output image.
Further, the process of processing the input image with the hybrid of side window filtering and fast edge-preserving filtering includes:
acquiring the pixels of the input image, treating each pixel as a potential edge with side window filtering, and generating several side windows around each pixel;
processing the input image, including adjusting its brightness, and obtaining and outputting, for each pixel, the side window with the smallest Euclidean distance to the input image, so as to preserve the edge information of the input image;
obtaining a filtered image from the output side window with the smallest distance, and processing the filtered image with fast edge-preserving filtering to obtain a preprocessed image;
calculating the ambient light estimate from the preprocessed image using the dark channel prior.
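As a concrete illustration of this step, the sketch below estimates the ambient light with the standard dark channel prior: a per-pixel minimum over the color channels, a local minimum filter, and an average of the input pixels at the brightest dark-channel positions. It is only a minimal Python/NumPy reference, not the patent's implementation; the patch size, the 0.1% selection ratio, and the function names are assumptions.

```python
import numpy as np
import cv2

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a local minimum (erosion) over a patch."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_ambient_light(preprocessed, patch=15, ratio=0.001):
    """Average the input pixels at the brightest 0.1% of dark-channel positions (assumed ratio)."""
    dark = dark_channel(preprocessed, patch)
    n = max(1, int(dark.size * ratio))
    idx = np.argsort(dark.ravel())[-n:]          # brightest dark-channel pixels
    flat = preprocessed.reshape(-1, 3)
    return flat[idx].mean(axis=0)                # ambient light estimate A as a color triple
```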
Further, the process of setting the light source threshold of the input image according to the ambient light estimate includes:
calculating the luminance value of each pixel in the input image;
calculating the difference between the luminance value and the ambient light estimate, and taking the maximum absolute value of the difference as the light source threshold.
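For example, assuming the luminance value is the per-pixel gray level of the input image and the ambient light estimate has been reduced to a scalar (e.g. its mean over channels), this threshold can be written in a few lines; the names `luminance` and `light_source_threshold` are illustrative, not taken from the patent.

```python
import numpy as np

def light_source_threshold(luminance, ambient_light):
    """M = max_x |I_x - A|: the largest absolute difference between the pixel
    luminance and the (scalar) ambient light estimate."""
    return np.abs(luminance - ambient_light).max()
```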
Further, the process of dividing the input image into the light source region and the non-light-source region according to the light source threshold, optimizing the transmittance of the two regions separately with the adaptive light source matrix mechanism, and fusing them to solve for the initial transmittance includes:
for any pixel in the input image, classifying the pixel as belonging to the light source region or the non-light-source region; when the luminance value of the pixel is greater than the light source threshold, the pixel is classified as belonging to the light source region, otherwise to the non-light-source region;
when the pixel belongs to the light source region, calculating and optimizing the transmittance of the light source region with the adaptive light source matrix mechanism, including:
according to the light source matrix mechanism, numbering each pixel of the input image in descending order of luminance value, and calculating the first light source influence matrix, the second light source influence matrix and the pixel light source influence matrix of each pixel; specifically, the input image is defined to have size w*h with x∈[1,w] and y∈[1,h], and the pixels are numbered {m0, m1, ..., mT-1} in descending order of luminance value, where T is the total number of pixels; then,
for a given x, the first light source influence matrix kx is calculated as follows:
for a given y, the second light source influence matrix ky is calculated as follows:
the pixel light source influence matrix Kx is calculated as follows:
where Cx and Cy denote the luminance values of the pixel at the corresponding x and y, M is the light source threshold, and dx,m denotes the distance from other pixels to the selected pixel;
according to the pixel light source influence matrix, obtaining the adjustment correction coefficient wx that optimizes the transmittance of the light source region,
where α denotes the adjustment correction factor of the input image and tx denotes the initial transmittance;
when the pixel belongs to the non-light-source region, calculating the transmittance of the non-light-source region through the dark channel prior theory;
according to the optimized transmittance of the light source region and the transmittance of the non-light-source region, calculating the initial transmittance tM by fusion,
where Ω denotes the light source region, wxtx∈Ω denotes the transmittance of the light source region, and the other term in the fusion denotes the transmittance of the non-light-source region.
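The influence-matrix formulas themselves are not reproduced in the text above (they appeared as figures), so the sketch below only shows the surrounding logic of this step: mask the light source region with the threshold, scale its transmittance by a per-pixel correction coefficient wx, and keep the dark-channel transmittance elsewhere. The weights `w` are passed in as an argument rather than derived, and applying them to the dark-channel estimate inside the light source region is a simplifying assumption.

```python
import numpy as np

def fuse_transmittance(luminance, t_dark, w, threshold):
    """Region-wise fusion of the transmittance into t_M.

    luminance : per-pixel luminance I_x
    t_dark    : transmittance from the dark channel prior (non-light-source regions)
    w         : adjustment correction coefficients w_x for the light source region,
                assumed to come from the light source influence matrices (not shown)
    threshold : light source threshold M
    """
    light_mask = luminance > threshold                 # Omega: light source region
    t_m = np.where(light_mask, w * t_dark, t_dark)     # w_x t_x inside Omega, t_x elsewhere
    return np.clip(t_m, 0.05, 1.0)                     # assumed lower bound for stability
```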
Further, the process of performing light source compensation on the initial transmittance and obtaining and outputting the final transmittance of the input image is:
performing light source compensation on the initial transmittance by gamma correction, adjusting the compensation coefficient to obtain the final transmittance.
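A minimal sketch of this compensation, assuming it amounts to raising the fused transmittance to the power of the compensation coefficient (0.8 in the embodiment described later); the exact form of the correction is not spelled out in the text, so treat the power law as an assumption.

```python
def compensate_transmittance(t_m, gamma=0.8):
    """Gamma-style light source compensation of the fused transmittance."""
    return t_m ** gamma
```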
Further, the process of obtaining the dehazed output image includes:
according to the ambient light estimate and the final transmittance, performing a dehazing calculation on any pixel of the input image based on the atmospheric scattering model, with the formula:
where Ix denotes the original (hazy) value of the pixel in the input image, tF is the final transmittance, and Jx denotes the dehazed value of that pixel;
combining the dehazed results of all pixels to obtain the dehazed output image.
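The recovery formula itself is not reproduced above (it appeared as a figure), but the standard atmospheric scattering model gives Jx = (Ix - A) / tF + A; the sketch below applies it per pixel with a lower bound on the transmittance to avoid dividing by very small values. Treat it as an assumption consistent with the cited model rather than the patent's exact expression.

```python
import numpy as np

def dehaze(hazy, ambient_light, t_final, t_min=0.1):
    """Recover the scene radiance J = (I - A) / max(t, t_min) + A for a float image in [0, 1]."""
    t = np.maximum(t_final, t_min)[..., None]   # broadcast the transmittance over the color channels
    return np.clip((hazy - ambient_light) / t + ambient_light, 0.0, 1.0)
```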
In a second aspect, a system for monitoring UAVs under low-illumination haze is disclosed, comprising:
an acquisition and processing module, configured to acquire the pixels of the input image, preprocess the input image with a hybrid of side window filtering and fast edge-preserving filtering, and obtain the ambient light estimate of the input image;
a setting module, configured to set the light source threshold of the input image according to the ambient light estimate;
a division and solving module, configured to divide the input image into the light source region and the non-light-source region according to the light source threshold, optimize the transmittance of the two regions separately with the adaptive light source matrix mechanism, and fuse them to solve for the initial transmittance;
a compensation module, configured to perform light source compensation on the initial transmittance and obtain and output the final transmittance of the input image;
a calculation module, configured to perform, according to the ambient light estimate and the final transmittance, a dehazing calculation on the input image based on the atmospheric scattering model to obtain the dehazed output image.
Further, the execution units by which the acquisition and processing module obtains the ambient light estimate of the input image include:
an acquisition unit, configured to acquire the pixels of the input image, treat each pixel as a potential edge with side window filtering, and generate several side windows around each pixel;
an adjustment unit, configured to process the input image, including adjusting its brightness, and to obtain and output, for each pixel, the side window with the smallest Euclidean distance to the input image, so as to preserve the edge information of the input image;
a processing unit, configured to obtain the filtered image from the output side window with the smallest distance and process the filtered image with fast edge-preserving filtering to obtain the preprocessed image;
a calculation unit, configured to calculate the ambient light estimate from the preprocessed image using the dark channel prior.
In a third aspect, a computer device is disclosed, comprising at least one processor coupled to a memory, the memory storing a program or instructions that run on the processor; when the program or instructions are executed by the processor, the steps of the above method for monitoring UAVs under low-illumination haze are implemented.
In a fourth aspect, a readable storage medium is disclosed, on which a program or instructions are stored; when the program or instructions are executed by a processor, the steps of the above method for monitoring UAVs under low-illumination haze are implemented.
From the above technical solutions, the technical solution of the present invention achieves the following beneficial effects:
The method and system for monitoring UAVs under low-illumination haze disclosed in the present invention aim to solve the problem of monitoring UAVs under low-illumination haze. The advantages in practical application are as follows:
(1) The scheme preprocesses the low-illumination haze image with a hybrid of side window filtering and fast edge-preserving filtering, eliminating the influence of strong light source regions and pseudo light source regions on the ambient light estimate and improving its accuracy. This solves the problem that low-illumination haze images appear too dark after dehazing, so the drone can be observed more clearly.
(2) The scheme sets the light source threshold according to the luminance values of the image and the processed ambient light estimate, fuses the transmittance of the different light source regions with the light source matrix mechanism, and then performs light source compensation by gamma correction to obtain the final transmittance image. This removes the halo effect that appears around light sources when low-illumination images are dehazed and improves the texture detail of the image, so that drones under low-illumination haze can be monitored better and the dangerous items they carry can be identified.
(3) By modifying the ambient light estimation and optimizing the solution of the transmittance, the scheme improves the rendering of the sky region, reduces color cast, and suppresses artifacts and noise, preventing these effects from degrading drone monitoring and making the carried dangerous items impossible to identify clearly.
It should be understood that all combinations of the foregoing concepts, as well as the additional concepts described in more detail below, can be regarded as part of the inventive subject matter of the present disclosure as long as such concepts do not contradict each other.
The foregoing and other aspects, embodiments and features of the present teachings will be more fully understood from the following description taken in conjunction with the accompanying drawings. Other additional aspects of the present invention, such as the features and/or beneficial effects of the exemplary embodiments, will be apparent from the description below or may be learned through practice of specific embodiments in accordance with the teachings of the present invention.
Brief Description of the Drawings
The drawings are not drawn to scale. In the drawings, each identical or nearly identical component shown in the various figures may be denoted by the same reference numeral. For clarity, not every component is labeled in every figure. Embodiments of various aspects of the present invention will now be described by way of example with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of the method for monitoring UAVs under low-illumination haze disclosed in the present invention;
Fig. 2 is a flowchart of obtaining the ambient light estimate of the input image according to the present invention;
Fig. 3 is a flowchart of setting the light source threshold of the input image according to the present invention;
Fig. 4 is a block diagram of the system for monitoring UAVs under low-illumination haze disclosed in the present invention;
Fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present application;
Figs. 6(a), (b) and (c) are images captured while monitoring drones under low-illumination haze in the embodiment;
Figs. 7(a), (b) and (c) are the corresponding results of Fig. 6 after the hybrid filtering;
Figs. 8(a), (b) and (c) are the transmittance images of Fig. 7 after fusion with the adaptive light source matrix mechanism;
Figs. 9(a), (b) and (c) are the corresponding drone monitoring results of Fig. 8 after dehazing.
Detailed Description of the Embodiments
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the described embodiments, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention. Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning understood by a person of ordinary skill in the art to which the present invention belongs.
"First", "second" and similar words used in the specification and claims of the present application do not indicate any order, quantity or importance, but are only used to distinguish different components. Likewise, unless the context clearly indicates otherwise, singular forms such as "a", "an" or "the" do not imply a limitation on quantity but rather indicate the presence of at least one. Words such as "comprise" or "include" mean that the elements or items appearing before them cover the features, integers, steps, operations, elements and/or components listed after them, without excluding the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof. "Up", "down", "left", "right" and the like are only used to express relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
The existing technology for monitoring drones under low-illumination haze with image techniques has the following problems: 1) in the existing dark channel prior theory, the ambient light estimate chosen for daytime haze images is rather simple; at night, the influence of various lights biases the solution of the ambient light value, and the dehazed result is too dark; 2) when handling the transmittance, guided filtering can alleviate the blocking artifacts in the transmittance image caused by image partitioning, but when the coarse transmittance values of two adjacent blocks differ greatly, the residual blocking artifacts remain significant, leading to obvious halo effects near strong lights in the final output image; moreover, although existing algorithms have been improved, they ignore the influence of the sky region, causing color cast and artifacts in the sky region of the image. Therefore, the present invention proposes a low-illumination dehazing method for images that is applicable to special scenes such as low-illumination haze, solving the above problems and enabling the monitoring of drones and the identification of the dangerous items they carry.
The method and system for monitoring UAVs under low-illumination haze disclosed in the present invention are further described below in conjunction with the embodiments shown in the accompanying drawings.
As shown in Fig. 1, the method for monitoring UAVs under low-illumination haze disclosed in the embodiment includes the following steps:
Step S102: acquire the pixels of the input image, preprocess the input image with a hybrid of side window filtering and fast edge-preserving filtering, and obtain the ambient light estimate of the input image; the purpose of this step is to process the strong light source points and pseudo light source points of the image and eliminate their influence on the ambient light estimation.
Step S104: set the light source threshold of the input image according to the ambient light estimate.
Step S106: divide the input image into the light source region and the non-light-source region according to the light source threshold, optimize the transmittance of the two regions separately with the adaptive light source matrix mechanism, and fuse them to solve for the initial transmittance.
Step S108: perform light source compensation on the initial transmittance, and obtain and output the final transmittance of the input image.
Step S110: according to the ambient light estimate and the final transmittance, perform a dehazing calculation on the input image based on the atmospheric scattering model to obtain the dehazed output image.
Specifically, as shown in Fig. 2, the process of processing the input image with the hybrid of side window filtering and fast edge-preserving filtering includes the following steps. Step S1022: acquire the pixels of the input image, treat each pixel as a potential edge with side window filtering, and generate several side windows around each pixel. Step S1024: process the input image, including adjusting its brightness, and obtain and output, for each pixel, the side window with the smallest Euclidean distance to the input image, so as to preserve the edge information of the input image. Step S1026: obtain the filtered image from the output side window with the smallest distance, and process the filtered image with fast edge-preserving filtering to obtain the preprocessed image. Step S1028: calculate the ambient light estimate from the preprocessed image using the dark channel prior.
In steps S1022 to S1024, the calculation for obtaining the side window with the smallest distance by side window filtering is as follows:
where S = {U, D, L, R, NW, NE, SW, SE} denotes the set of eight side windows generated around the pixel; n∈S denotes any side window of the set S; In denotes the filtered value of the side window; qi denotes the weight, based on the kernel function F, of pixel j near the target pixel i; wij is the pixel value; Nn denotes the sum of the pixel values of the eight side windows; and Isw denotes the side window with the smallest Euclidean distance to the pixel of the input image.
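To make the side-window idea concrete, the sketch below implements a box-kernel variant: each of the eight side windows (left, right, up, down, and the four corner quadrants) is averaged, and for every pixel the window mean closest to the original value is kept. It follows the general side window filtering technique rather than the patent's exact weights or kernel F, and the window radius is an assumption.

```python
import numpy as np
import cv2

def side_window_box_filter(img, r=3):
    """Side window filtering with box kernels: for each pixel keep the side-window
    mean with the smallest distance to the input value."""
    img = img.astype(np.float32)
    # (top, bottom, left, right) extents of the eight side windows around a pixel
    windows = {"L": (r, r, r, 0), "R": (r, r, 0, r), "U": (r, 0, r, r), "D": (0, r, r, r),
               "NW": (r, 0, r, 0), "NE": (r, 0, 0, r), "SW": (0, r, r, 0), "SE": (0, r, 0, r)}
    best = img.copy()
    best_err = np.full(img.shape, np.inf, dtype=np.float32)
    for top, bot, left, right in windows.values():
        kh, kw = top + bot + 1, left + right + 1
        kernel = np.ones((kh, kw), np.float32) / (kh * kw)
        # the anchor places the target pixel at the appropriate corner/edge of the window
        mean = cv2.filter2D(img, -1, kernel, anchor=(left, top),
                            borderType=cv2.BORDER_REFLECT)
        err = np.abs(mean - img)
        mask = err < best_err
        best[mask], best_err[mask] = mean[mask], err[mask]
    return best
```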
The process of step S104, setting the light source threshold of the input image according to the ambient light estimate, includes the following steps, as shown in Fig. 3. Step S1042: calculate the luminance value of each pixel in the input image. Step S1044: calculate the difference between the luminance value and the ambient light estimate, and take the maximum absolute value of the difference as the light source threshold.
That is, the luminance value Ix of each pixel of the image is obtained and differenced with the ambient light estimate A, and |Ix-A|max is taken as the light source threshold to distinguish the light source region from the non-light-source region.
In a specific embodiment, step S106 of dividing the input image into the light source region and the non-light-source region according to the light source threshold, optimizing the transmittance of the two regions separately with the adaptive light source matrix mechanism, and fusing them to solve for the initial transmittance includes the following calculation:
for any pixel in the input image, classify the pixel as belonging to the light source region or the non-light-source region; when the luminance value of the pixel is greater than the light source threshold, the pixel belongs to the light source region, otherwise to the non-light-source region;
when the pixel belongs to the light source region, i.e. Ix > |Ix-A|max, calculate and optimize the transmittance of the light source region with the adaptive light source matrix mechanism, including:
according to the light source matrix mechanism, number each pixel of the input image in descending order of luminance value, and calculate the first light source influence matrix, the second light source influence matrix and the pixel light source influence matrix of each pixel; specifically, the input image is defined to have size w*h with x∈[1,w] and y∈[1,h], and the pixels are numbered {m0, m1, ..., mT-1} in descending order of luminance value, where T is the total number of pixels; then,
for a given x, the first light source influence matrix kx is calculated as follows:
for a given y, the second light source influence matrix ky is calculated as follows:
the pixel light source influence matrix Kx is calculated as follows:
where Cx and Cy denote the luminance values of the pixel at the corresponding x and y, M is the light source threshold, and dx,m denotes the distance from other pixels to the selected pixel;
according to the pixel light source influence matrix, obtain the adjustment correction coefficient wx that optimizes the transmittance of the light source region,
where α denotes the adjustment correction factor of the input image, set to 0.5 in the embodiment, and tx denotes the initial transmittance;
when the pixel belongs to the non-light-source region, i.e. Ix ≤ |Ix-A|max, calculate the transmittance of the non-light-source region through the dark channel prior theory;
according to the optimized transmittance of the light source region and the transmittance of the non-light-source region, calculate the initial transmittance tM by fusion,
where Ω denotes the light source region, wxtx∈Ω denotes the transmittance of the light source region, and the other term in the fusion denotes the transmittance of the non-light-source region.
Further, in connection with the embodiment, the present invention performs light source compensation on the initial transmittance by gamma correction, adjusting the compensation coefficient, and finally outputs the final transmittance of the input image; in the implementation, the compensation coefficient is set to 0.8.
The process of obtaining the dehazed output image in step S110 of the present invention includes the following calculation:
first, according to the ambient light estimate and the final transmittance, a dehazing calculation is performed on any pixel of the input image based on the atmospheric scattering model, with the formula:
where Ix denotes the original (hazy) value of the pixel in the input image, tF is the final transmittance, and Jx denotes the dehazed value of that pixel;
then, the dehazed results of all pixels are combined to obtain the dehazed output image.
In the method for monitoring UAVs under low-illumination haze disclosed in the present invention, the low-illumination haze image is first preprocessed with a hybrid of side window filtering and fast edge-preserving filtering, eliminating the influence of strong light source regions and pseudo light source regions on the ambient light estimate; the light source threshold is then set from the image and the ambient light estimate, the transmittance of the different light source regions is fused with the light source matrix mechanism, and light source compensation is performed by gamma correction to obtain the final transmittance image, which removes the halo effect around light sources in dehazed low-illumination images; finally, by modifying the ambient light estimation and optimizing the solution of the transmittance, the rendering of the sky region is improved, color cast is reduced, and the artifacts and noise easily produced during processing are suppressed. The method of the present invention can be effectively applied to drone identification in special scenes such as low-illumination haze and improves the safe management and control of drones.
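Putting the sketches above together, a hypothetical end-to-end driver could look as follows. Every function comes from the earlier sketches; the fast edge-preserving step is approximated here with a guided filter from opencv-contrib (`cv2.ximgproc.guidedFilter`), the 0.95 factor in the dark-channel transmittance and the constant placeholder weights are assumptions, and none of this is the patent's exact pipeline.

```python
import numpy as np
import cv2

def monitor_dehaze(bgr_uint8):
    """Hypothetical end-to-end pipeline assembled from the sketches above."""
    img = bgr_uint8.astype(np.float32) / 255.0

    # 1. preprocessing: side window filter, then an edge-preserving smoothing pass
    pre = side_window_box_filter(img, r=3)
    pre = cv2.ximgproc.guidedFilter(guide=pre, src=pre, radius=8, eps=1e-3)

    # 2. ambient light estimate and light source threshold
    A = estimate_ambient_light(pre)
    luminance = pre.mean(axis=2)
    M = light_source_threshold(luminance, A.mean())

    # 3. dark-channel transmittance, region-wise fusion, gamma compensation
    t_dark = 1.0 - 0.95 * dark_channel(pre / A, patch=15)
    w = np.full_like(luminance, 0.5)   # placeholder for the influence-matrix weights (not given in the text)
    t_final = compensate_transmittance(fuse_transmittance(luminance, t_dark, w, M))

    # 4. recovery with the atmospheric scattering model
    return (dehaze(img, A, t_final) * 255).astype(np.uint8)
```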
An embodiment of the present invention also provides a computer device, as shown in Fig. 5. The device includes at least one processor coupled to a memory, the memory storing a program or instructions that run on the processor; when the program or instructions are executed by the processor, the steps of the method for monitoring UAVs under low-illumination haze disclosed in the above embodiments are implemented.
The above program may run in the processor or may be stored in the memory, i.e., a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. Information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
These computer programs may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams; the corresponding method steps may be implemented by different modules.
This embodiment also provides such an apparatus or system, which may be called a system for monitoring UAVs under low-illumination haze. As shown in Fig. 4, the system includes: an acquisition and processing module, configured to acquire the pixels of the input image, preprocess the input image with a hybrid of side window filtering and fast edge-preserving filtering, and obtain the ambient light estimate of the input image; a setting module, configured to set the light source threshold of the input image according to the ambient light estimate; a division and solving module, configured to divide the input image into the light source region and the non-light-source region according to the light source threshold, optimize the transmittance of the two regions separately with the adaptive light source matrix mechanism, and fuse them to solve for the initial transmittance; a compensation module, configured to perform light source compensation on the initial transmittance and obtain and output the final transmittance of the input image; and a calculation module, configured to perform, according to the ambient light estimate and the final transmittance, a dehazing calculation on the input image based on the atmospheric scattering model to obtain the dehazed output image.
The system is used to implement the steps of the method for monitoring UAVs under low-illumination haze disclosed in the above embodiments; what has already been described is not repeated here.
For example, the execution units by which the acquisition and processing module obtains the ambient light estimate of the input image include: an acquisition unit, configured to acquire the pixels of the input image, treat each pixel as a potential edge with side window filtering, and generate several side windows around each pixel; an adjustment unit, configured to process the input image, including adjusting its brightness, and to obtain and output, for each pixel, the side window with the smallest Euclidean distance to the input image, so as to preserve the edge information of the input image; a processing unit, configured to obtain the filtered image from the output side window with the smallest distance and process the filtered image with fast edge-preserving filtering to obtain the preprocessed image; and a calculation unit, configured to calculate the ambient light estimate from the preprocessed image using the dark channel prior.
The process by which the adjustment unit obtains, for each pixel, the side window with the smallest Euclidean distance to the input image is as follows:
where S = {U, D, L, R, NW, NE, SW, SE} denotes the set of eight side windows generated around the pixel; n∈S denotes any side window of the set S; In denotes the filtered value of the side window; qi denotes the weight, based on the kernel function F, of pixel j near the target pixel i; wij is the pixel value; Nn denotes the sum of the pixel values of the eight side windows; and Isw denotes the side window with the smallest Euclidean distance to the pixel of the input image.
For another example, the process by which the setting module sets the light source threshold of the input image according to the ambient light estimate includes the following execution units: a second calculation unit, configured to calculate the luminance value of each pixel in the input image; and a third calculation unit, configured to calculate the difference between the luminance value and the ambient light estimate and take the maximum absolute value of the difference as the light source threshold.
For another example, the process by which the division and solving module divides the input image into the light source region and the non-light-source region according to the light source threshold, optimizes the transmittance of the two regions separately with the adaptive light source matrix mechanism, and fuses them to solve for the initial transmittance includes the following calculation:
for any pixel in the input image, classify the pixel as belonging to the light source region or the non-light-source region; when the luminance value of the pixel is greater than the light source threshold, the pixel belongs to the light source region, otherwise to the non-light-source region;
when the pixel belongs to the light source region, i.e. Ix > |Ix-A|max, calculate and optimize the transmittance of the light source region with the adaptive light source matrix mechanism, including:
according to the light source matrix mechanism, number each pixel of the input image in descending order of luminance value, and calculate the first light source influence matrix, the second light source influence matrix and the pixel light source influence matrix of each pixel; specifically, the input image is defined to have size w*h with x∈[1,w] and y∈[1,h], and the pixels are numbered {m0, m1, ..., mT-1} in descending order of luminance value, where T is the total number of pixels; then,
for a given x, the first light source influence matrix kx is calculated as follows:
for a given y, the second light source influence matrix ky is calculated as follows:
the pixel light source influence matrix Kx is calculated as follows:
where Cx and Cy denote the luminance values of the pixel at the corresponding x and y, M is the light source threshold, and dx,m denotes the distance from other pixels to the selected pixel;
according to the pixel light source influence matrix, obtain the adjustment correction coefficient wx that optimizes the transmittance of the light source region,
where α denotes the adjustment correction factor of the input image, set to 0.5 in the embodiment, and tx denotes the initial transmittance;
when the pixel belongs to the non-light-source region, i.e. Ix ≤ |Ix-A|max, calculate the transmittance of the non-light-source region through the dark channel prior theory;
according to the optimized transmittance of the light source region and the transmittance of the non-light-source region, calculate the initial transmittance tM by fusion,
where Ω denotes the light source region, wxtx∈Ω denotes the transmittance of the light source region, and the other term in the fusion denotes the transmittance of the non-light-source region.
For another example, the process by which the calculation module obtains the dehazed output image includes the following execution units:
a fourth calculation unit, configured to perform, according to the ambient light estimate and the final transmittance, a dehazing calculation on any pixel of the input image based on the atmospheric scattering model, with the formula:
where Ix denotes the original (hazy) value of the pixel in the input image, tF is the final transmittance, and Jx denotes the dehazed value of that pixel;
an output unit, configured to combine the dehazed results of all pixels to obtain the dehazed output image.
The specific implementation of the present invention is fully explained below in conjunction with the drone flight experiment under low-illumination haze shown in the accompanying drawings.
First, the light source points are preprocessed with the hybrid of side window filtering and fast edge-preserving filtering, eliminating the influence that the brightness of strong light source points and pseudo light source points has on the solution of the ambient light estimate in low-illumination haze scenes; see Figs. 6 and 7, where Fig. 6 shows direct captures under low-illumination haze and Fig. 7 shows the processed images, with improved accuracy of the ambient light value.
Second, the difference between the luminance value of each pixel of the image and the ambient light estimate is calculated, and the maximum is taken as the light source threshold to distinguish the light source region from the non-light-source region. For the non-light-source region, the transmittance is solved with the traditional dark channel prior theory; for the light source region, after optimization with the light source matrix mechanism, the transmittance image is solved by fusion. The transmittance image is then compensated for the light sources by gamma correction to obtain the final transmittance image, as shown in Fig. 8. As can be seen, the overall brightness and detail are greatly improved, the light source regions are well preserved, and the halo effect is less likely to occur during dehazing.
Finally, the ambient light estimate and the transmittance image obtained in the above steps are used to perform the dehazing calculation on the low-illumination haze image, giving the dehazed output image. As shown in Fig. 9, the image as a whole is bright and clear, the light in the light source regions is well maintained without halo effects, the color cast in the sky region is reduced, and there is no influence of artifacts or noise. Drones can thus be monitored effectively under low-illumination haze, and the actions of nearby drones can be recognized during aircraft take-off and landing to avoid major accidents.
Although the present invention has been disclosed above in terms of preferred embodiments, these are not intended to limit the present invention. Those of ordinary skill in the art to which the present invention belongs may make various changes and modifications without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the claims.
Claims (10)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311001014.0A | 2023-08-10 | 2023-08-10 | Method and system for monitoring unmanned aerial vehicle based on low-illuminance haze |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116957984A | 2023-10-27 |
Family
ID=88461901
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311001014.0A | Method and system for monitoring unmanned aerial vehicle based on low-illuminance haze | 2023-08-10 | 2023-08-10 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116957984A |
- 2023-08-10: application CN202311001014.0A filed in CN, published as CN116957984A; status: Pending
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |