CN115170437A - Fire scene low-quality image recovery method for rescue robot - Google Patents

Fire scene low-quality image recovery method for rescue robot

Info

Publication number
CN115170437A
Authority
CN
China
Prior art keywords
image
atmospheric light
transmissivity
module
flare
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210932068.8A
Other languages
Chinese (zh)
Other versions
CN115170437B (en)
Inventor
伊国栋
伊骊帆
裘乐淼
张树有
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210932068.8A priority Critical patent/CN115170437B/en
Publication of CN115170437A publication Critical patent/CN115170437A/en
Application granted granted Critical
Publication of CN115170437B publication Critical patent/CN115170437B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20028 Bilateral filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a fire scene low-quality image recovery method for a rescue robot. The flame region is segmented with a region threshold segmentation algorithm, which avoids the influence of the flame light source on the global atmospheric light estimation; an atmospheric light detection operator is designed and accurate atmospheric light parameters are obtained from superpixel blocks, solving the problem of global atmospheric light estimation distortion; a transmittance estimation optimization module is constructed and the transmittance is refined with a bilateral weighted guided filtering method, solving problems such as halos caused by traditional methods. The method improves the clarity of restored fire scene images and provides a data basis for scene detection, recognition, environment mapping and path planning by the rescue robot.

Description

A fire scene low-quality image restoration method for a rescue robot

Technical Field

The invention belongs to the field of rescue robot image processing, and in particular relates to a fire scene low-quality image restoration method for a rescue robot.

Background Art

In fire rescue environments, combustion is usually accompanied by uneven firelight and smoke. When a rescue robot performs tasks in such scenes, the images it collects are usually affected by environmental factors such as light, flame and fog.

Because the ambient light and the fog density in fire scene images are unevenly distributed, existing methods suffer from estimation distortion when processing fire scenes. By studying methods for clarifying low-quality images of rescue fire environments, low-quality degraded images are restored to high-quality images, providing a data basis for on-site detection, recognition, environment mapping and path planning by rescue robots.

Summary of the Invention

In order to solve the problems in the background art, the present invention proposes a fire scene low-quality image restoration method for a rescue robot.

The technical scheme adopted by the present invention is as follows:

1. A fire scene low-quality image restoration system for a rescue robot

The system comprises a regional atmospheric light estimation module, a transmittance estimation optimization module and an image reconstruction and restoration module; the regional atmospheric light estimation module comprises a flare region atmospheric light estimation module and a global atmospheric light estimation module.

The flare region atmospheric light estimation module is used to segment the original image into a flare region image and a non-flare region image.

The global atmospheric light estimation module is used to perform superpixel segmentation on the non-flare region image and to apply a designed atmospheric light detection operator, obtaining an estimate of the global atmospheric light.

The transmittance estimation optimization module is used to estimate a rough transmittance and to compute a refined transmittance with a bilateral weighted guided filtering method.

The image reconstruction and restoration module is used to reconstruct and restore a clear image.

2. A fire scene low-quality image restoration method for a rescue robot using the above system

Step 1: the flare region atmospheric light estimation module segments the original image I into a flare region image I_F and a non-flare region image I_NF;

Step 2: the global atmospheric light estimation module computes the global atmospheric light estimate A_0 by constructing an atmospheric light detection operator;

Step 3: the dark channel map and the global atmospheric light estimate are input into the transmittance estimation optimization module to estimate the rough transmittance, and bilateral weighted guided filtering is applied to obtain the refined transmittance t;

Step 4: the final clear image is restored by the image reconstruction and restoration module.

Step 1 specifically comprises:

1.1) The original image I is segmented by combining the RGB color space criterion and the HLS color space criterion to obtain an initial flare region image I_1;

1.2) Morphological opening followed by closing is applied to the initial flare region image I_1 to delete isolated points in the image and fill holes inside the region image I_1; Gaussian filtering is then applied to the morphologically processed image to obtain the final flare region image I_F, and the non-flare region image I_NF is obtained from the original image I and the flare region image I_F.
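A minimal sketch of this segmentation step, assuming an 8-bit BGR input and an OpenCV/NumPy environment. The numeric thresholds R_T, L_min and L_max, the structuring element and the Gaussian kernel size are illustrative assumptions (the patent states the thresholds only symbolically), and the HLS saturation condition is omitted because it is given only as a formula image:

```python
import cv2
import numpy as np

def segment_flare_region(img_bgr: np.ndarray,
                         R_T: int = 180, L_min: int = 80, L_max: int = 230) -> np.ndarray:
    """Return a boolean flare mask (I_F); the non-flare image I_NF is its complement."""
    B, G, R = cv2.split(img_bgr)
    L = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HLS)[:, :, 1]   # HLS luminance channel

    # RGB criterion (R > R_T, R >= G >= B) combined with the HLS luminance bounds
    mask = (R > R_T) & (R >= G) & (G >= B) & (L >= L_min) & (L <= L_max)
    mask = mask.astype(np.uint8) * 255

    # Morphological opening then closing: remove isolated points, fill small holes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Gaussian smoothing of the binary mask, then re-binarisation
    mask = cv2.GaussianBlur(mask, (5, 5), 0)
    return mask > 127
```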

Step 2 specifically comprises:

2.1) Minimum value filtering is performed on the R, G and B channels of the original image I to obtain the dark channel map I_d of the image;

2.2) For the original image I, the atmospheric light detection operator score is constructed:

score = (1 - S)·I_d

where S is the saturation component of the original image I;

2.3) Superpixel segmentation is performed on the non-flare region image I_NF to obtain a segmentation map I_s, and the atmospheric light detection operator score of each superpixel block s_i ∈ I_s is computed as the mean of score over the block:

score_{s_i} = (1/|s_i|) Σ_{x∈s_i} score(x)

where |s_i| is the number of pixels in the superpixel block s_i and x is a pixel in the superpixel block s_i;

2.4) The score values of the superpixel blocks are sorted in descending order, the superpixel block with the largest atmospheric light detection operator score is selected and denoted s_max, and the average of the pixel values of all pixels in the superpixel block s_max is computed to obtain the global atmospheric light estimate A_0:

A_0 = (1/|s_max|) Σ_{x∈s_max} I(x)

where |s_max| is the number of pixels in the superpixel block s_max and I(x) is the pixel value of pixel x in the superpixel block s_max.
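A sketch of steps 2.1 to 2.4, assuming an OpenCV/NumPy/scikit-image environment and an 8-bit BGR input. The patch size of the dark channel, the choice of SLIC as the superpixel algorithm and its n_segments value are assumptions (the patent does not name a specific superpixel method), and fire_mask is the flare mask from the step 1 sketch above:

```python
import cv2
import numpy as np
from skimage.segmentation import slic

def dark_channel(img_bgr: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel map I_d: per-pixel minimum over B, G, R followed by a
    spatial minimum (erosion) over a patch x patch window."""
    min_rgb = img_bgr.min(axis=2).astype(np.float32) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def airlight_score(img_bgr: np.ndarray) -> np.ndarray:
    """Atmospheric light detection operator: score = (1 - S) * I_d."""
    hls = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HLS)           # channel order: H, L, S
    saturation = hls[:, :, 2].astype(np.float32) / 255.0
    return (1.0 - saturation) * dark_channel(img_bgr)

def estimate_airlight(img_bgr: np.ndarray, fire_mask: np.ndarray,
                      n_segments: int = 200) -> np.ndarray:
    """Global atmospheric light A_0: mean colour of the non-flare superpixel
    block s_max whose mean detection-operator score is largest."""
    score = airlight_score(img_bgr)
    img_rgb = np.ascontiguousarray(img_bgr[:, :, ::-1])
    labels = slic(img_rgb, n_segments=n_segments, start_label=1)
    best_label, best_mean = None, -np.inf
    for lab in np.unique(labels):
        block = (labels == lab) & (~fire_mask)                # pixels of s_i outside the flare
        if block.any() and score[block].mean() > best_mean:
            best_mean, best_label = score[block].mean(), lab
    s_max = (labels == best_label) & (~fire_mask)
    return img_bgr[s_max].astype(np.float32).mean(axis=0) / 255.0   # per-channel A_0
```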

Step 3 specifically comprises:

3.1) The adaptive confidence t*(x) is computed from the L channel of the original image I_1 (the formula is given as an image in the original publication), where Ω is the minimum value filtering window, p is the confidence adjustment parameter, and L(y) is the L channel value of pixel y in the window Ω;

3.2) The rough transmittance map t_0 of the image is computed from the dark channel map I_d of the original image I_1 and the global atmospheric light estimate A_0 (the formula is given as an image in the original publication);

3.3) Bilateral weighted guided filtering is applied to optimize t_0 and obtain the image transmittance t, computed as

t = a·I_g + b

where I_g is the grayscale image of the original image I, and a and b are the guided filtering coefficients, whose expressions are given as formula images in the original publication;

where ε is a tolerance factor, d is the filtering window, ω(i,j,k,l) is the filtering weight coefficient, (k,l) denotes the center coordinates of the filtering window, (i,j) denotes the other coordinates in the window, and ω_m is a kernel function with m taking values in {1,2,3,4} (its expression is given as a formula image in the original publication), where σ_d is the spatial-domain weight and σ_r is the range weight;

where I_m(i,j) and I_m(k,l), m ∈ {1,2,3,4}, denote the pixel value of the corresponding image at points (i,j) and (k,l). Among the four guidance images, I_2 = t_0 and I_3 = I_g; the definitions of I_1 and I_4 are given as formula images in the original publication and involve the matrix Hadamard product, denoted ⊙.
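A sketch of step 3 under stated assumptions: the rough transmittance uses the standard dark channel prior form with a constant weight omega standing in for the adaptive confidence t*(x) (whose exact formula is given only as an image), and the refinement uses a plain guided filter t = a·I_g + b without the bilateral weights ω_m of the patent; radius, eps, omega and the clamp bounds are illustrative values:

```python
import cv2
import numpy as np

def rough_transmission(dark: np.ndarray, A0: float, omega: float = 0.95) -> np.ndarray:
    """Rough transmittance t_0 = 1 - omega * I_d / A_0 (dark channel prior form);
    the patent's adaptive confidence t*(x) plays the role of omega here."""
    return np.clip(1.0 - omega * dark / max(A0, 1e-6), 0.0, 1.0)

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  radius: int = 40, eps: float = 1e-3) -> np.ndarray:
    """Plain (unweighted) guided filter returning t = a * I_g + b with
    window-averaged coefficients a and b."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, ddepth=-1, ksize=ksize)
    mean_I, mean_p = mean(guide), mean(src)
    cov_Ip = mean(guide * src) - mean_I * mean_p
    var_I = mean(guide * guide) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                # guided filtering coefficient a
    b = mean_p - a * mean_I                   # guided filtering coefficient b
    return mean(a) * guide + mean(b)

def refine_transmission(img_bgr: np.ndarray, t0: np.ndarray) -> np.ndarray:
    """Refined transmittance t, guided by the grayscale image I_g."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return np.clip(guided_filter(gray, t0.astype(np.float32)), 0.05, 1.0)
```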

In step 4, the image restoration module outputs the final clear image J by inverting the atmospheric scattering model with the estimated atmospheric light A_0 and transmittance t (the restoration formula is given as an image in the original publication).
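A sketch of step 4, assuming the restoration inverts the atmospheric scattering model I = J·t + A_0·(1 - t) with the per-channel A_0 from the step 2 sketch; the lower clamp t_min is an illustrative assumption, since the exact restoration formula is given only as an image in the original:

```python
import numpy as np

def restore(img_bgr: np.ndarray, t: np.ndarray, A0: np.ndarray,
            t_min: float = 0.1) -> np.ndarray:
    """Recover J from I = J*t + A_0*(1 - t)."""
    img = img_bgr.astype(np.float32) / 255.0
    t_clamped = np.maximum(t, t_min)[:, :, None]    # avoid amplifying noise where t -> 0
    J = (img - A0) / t_clamped + A0                 # per-channel A_0 broadcasts over H x W
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```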

Beneficial effects of the present invention:

The fire scene low-quality image restoration method for a rescue robot proposed by the present invention segments the flame region with a region threshold segmentation algorithm, avoiding the distorting influence of the flame light source on the global atmospheric light estimation; an atmospheric light detection operator is designed and accurate atmospheric light parameters are obtained from superpixel blocks, solving the problem of global atmospheric light estimation distortion; a transmittance estimation optimization module is constructed and the transmittance is refined with a bilateral weighted guided filtering method, solving problems such as halos caused by conventional methods. The proposed method improves the clarity of restored fire scene images and provides a data basis for the rescue robot's on-site detection, recognition, environment mapping and path planning.

Brief Description of the Drawings

Fig. 1 is a flow chart of the rescue fire scene image clarification process;

Fig. 2 is the original image in an embodiment of the present invention;

Fig. 3 is the flare region map output by the flare region segmentation module after binarization;

Fig. 4 is the rough transmittance estimation map;

Fig. 5 is the refined transmittance estimation map after bilateral weighted guided filtering.

Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawings and embodiments.

1. The present invention comprises a rescue fire scene image clarification system, including a regional atmospheric light estimation module, a transmittance estimation optimization module and an image reconstruction and restoration module. The regional atmospheric light estimation module includes a flare region atmospheric light estimation module and a global atmospheric light estimation module.

2. As shown in Fig. 1, the regional atmospheric light estimation module estimates the atmospheric light of the input original image I: a region threshold segmentation algorithm segments the flame region, the image is then superpixel-segmented and an atmospheric light detection operator is applied to obtain the global atmospheric light; the transmittance estimation optimization module estimates the rough transmittance and then computes the refined transmittance with the bilateral weighted guided filtering method; finally the image restoration module restores a high-quality image.
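A minimal end-to-end sketch of this pipeline, chaining the helper functions sketched in the summary section above; reducing the per-channel A_0 to a scalar for the transmittance step is an illustrative choice:

```python
import numpy as np

def restore_fire_scene(img_bgr: np.ndarray) -> np.ndarray:
    """Regional atmospheric light estimation -> transmittance estimation and
    refinement -> image reconstruction, following the flow of Fig. 1."""
    fire_mask = segment_flare_region(img_bgr)          # step 1: flare / non-flare split
    A0 = estimate_airlight(img_bgr, fire_mask)         # step 2: global atmospheric light
    t0 = rough_transmission(dark_channel(img_bgr), float(A0.mean()))   # step 3: rough t_0
    t = refine_transmission(img_bgr, t0)               #          guided-filter refinement
    return restore(img_bgr, t, A0)                     # step 4: final clear image J
```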

3. The flare region atmospheric light estimation module segments the original image I shown in Fig. 2 into a flare region image I_F and a non-flare region image I_NF.

This specifically includes:

Step 1: the original image I is segmented by combining the RGB color space criterion and the HLS color space criterion to extract the flare region, obtaining the region image I_1.

The RGB color space criterion is:

R > R_T

R ≥ G ≥ B

The HLS color space criterion is:

(a saturation condition given as a formula image in the original publication)

L_min ≤ L ≤ L_max

where R, G and B are the red, green and blue components of the image in RGB color space, R_T is the red threshold, S and L are the saturation and luminance components of the image in HLS color space, L_min is the minimum luminance threshold and L_max is the maximum luminance threshold.

Step 2: morphological opening followed by closing is applied to the region image I_1, deleting isolated points in the image and filling holes inside the target region; Gaussian filtering is then applied to the morphological result to obtain the flare region image I_F shown in Fig. 3 and the non-flare region image I_NF.

4. The global atmospheric light estimation module computes the global atmospheric light value A_0 by constructing an atmospheric light detection operator.

This specifically includes:

Step 1: minimum filtering is performed on the R, G and B channels of the input image I to obtain the dark channel map I_d of the image;

Step 2: for the input image I, the atmospheric light detection operator score is designed:

score = (1 - S)·I_d

where S is the saturation component of image I.

Step 3: superpixel segmentation is applied to the non-flare region image I_NF to obtain the segmentation map I_s, and the atmospheric light detection operator score of each superpixel block s_i ∈ I_s is computed as

score_{s_i} = (1/|s_i|) Σ_{x∈s_i} score(x)

where |s_i| is the number of pixels in the superpixel block s_i.

Step 4: the score values are sorted in descending order, the superpixel block s_max with the largest score value is selected, and the average of all pixel values in this superpixel block is computed as the estimate A_0 of the global atmospheric light:

A_0 = (1/|s_max|) Σ_{x∈s_max} I(x)

where |s_max| is the number of pixels in the superpixel block s_max.

5. The transmittance estimation optimization module takes the dark channel map and the global atmospheric light estimate as input, estimates the rough transmittance, and optimizes it to the refined transmittance t by bilateral weighted guided filtering.

This specifically includes:

Step 1: the adaptive confidence t*(x) is computed from the L channel of the image (the formula is given as an image in the original publication), where Ω is the minimum value filtering window and p is the confidence adjustment parameter;

Step 2: as shown in Fig. 4, the rough transmittance map t_0 of the image is computed (the formula is given as an image in the original publication);

Step 3: as shown in Fig. 5, bilateral weighted guided filtering is applied to optimize t_0 and estimate the refined image transmittance t, computed as

t = a·I_g + b

where a and b are the bilateral weighted guidance coefficients, whose expressions are given as formula images in the original publication; ε is a tolerance factor, d is the filtering window, ω(i,j,k,l) is the filtering weight coefficient, (k,l) denotes the window center coordinates, (i,j) denotes the other coordinates in the window, and ω_m is a kernel function with m taking values in {1,2,3,4} (given as a formula image in the original publication), where σ_d is the spatial-domain weight and σ_r is the range weight.

Here I_m(i,j) and I_m(k,l) denote the pixel value of the corresponding image at the given point; I_2 = t_0 and I_3 = I_g, while I_1 and I_4 are defined by formula images in the original publication and involve the matrix Hadamard product, denoted ⊙.

6. The image restoration module outputs the final clear image J by inverting the atmospheric scattering model with the estimated atmospheric light A_0 and transmittance t (the restoration formula is given as an image in the original publication).

Compared with other methods, the present invention has the advantages of high restoration quality for low-quality fire scene images while occupying very few computing resources, and can be effectively applied to image pre-processing for fire scene detection and recognition by rescue robots.

Claims (6)

1. A fire scene low-quality image recovery system for a rescue robot, characterized by comprising a regional atmospheric light estimation module, a transmittance estimation optimization module and an image reconstruction and recovery module, wherein the regional atmospheric light estimation module comprises a flare region atmospheric light estimation module and a global atmospheric light estimation module;
the flare region atmospheric light estimation module is used for dividing an original image into a flare region image and a non-flare region image;
the global atmospheric light estimation module is used for carrying out superpixel segmentation on the image and designing an atmospheric light detection operator to obtain an estimated value of the global atmospheric light;
the transmittance estimation optimization module is used for estimating a rough transmittance and calculating an accurate transmittance based on a bilateral weighted guided filtering method;
and the image reconstruction and recovery module is used for reconstructing and recovering a clear image.
2. The fire scene low-quality image restoration method for the rescue robot using the system of claim 1, comprising:
Step 1: dividing an original image I into a flare region image I_F and a non-flare region image I_NF by the flare region atmospheric light estimation module;
Step 2: calculating, by the global atmospheric light estimation module, a global atmospheric light estimate A_0 by constructing an atmospheric light detection operator;
Step 3: inputting the dark channel map and the global atmospheric light estimate into the transmittance estimation optimization module to estimate a rough transmittance, and applying bilateral weighted guided filtering to obtain an accurate transmittance t;
Step 4: recovering a final clear image through the image reconstruction and recovery module.
3. The method for recovering the low-quality image of the fire scene for the rescue robot as recited in claim 2, wherein the step 1) specifically comprises:
1.1) combining the RGB color space criterion and the HLS color space criterion to segment the original image I and obtain an initial flare region image I_1;
1.2) performing morphological opening followed by closing on the initial flare region image I_1, deleting isolated points in the image and filling holes inside the region image I_1, and further performing Gaussian filtering on the morphologically processed image to obtain the final flare region image I_F; and obtaining the non-flare region image I_NF according to the original image I and the flare region image I_F.
4. The method for recovering the low-quality image of the fire scene for the rescue robot as recited in claim 2, wherein the step 2) specifically comprises:
2.1) carrying out minimum value filtering on the three channels R, G and B of the original image I to obtain the dark channel map I_d of the image;
2.2) for the original image I, constructing the atmospheric light detection operator score:
score = (1 - S)·I_d
wherein S is the saturation component of the original image I;
2.3) performing superpixel segmentation on the non-flare region image I_NF to obtain a segmentation map I_s, and calculating the atmospheric light detection operator score of each superpixel block s_i in the segmentation map I_s:
score_{s_i} = (1/|s_i|) Σ_{x∈s_i} score(x)
wherein |s_i| is the number of pixels in the superpixel block s_i and x is a pixel in the superpixel block s_i;
2.4) sorting the score values of the superpixel blocks in descending order, selecting the superpixel block with the largest atmospheric light detection operator score, denoted s_max, and calculating the average of the pixel values of all pixels in the superpixel block s_max to obtain the global atmospheric light estimate A_0:
A_0 = (1/|s_max|) Σ_{x∈s_max} I(x)
wherein |s_max| is the number of pixels in the superpixel block s_max and I(x) is the pixel value of pixel x in the superpixel block s_max.
5. The method for recovering the low-quality image of the fire scene for the rescue robot as recited in claim 2, wherein the step 3) specifically comprises:
3.1) calculating the adaptive confidence t*(x) through the L channel of the original image I_1 (the formula is given as an image in the original publication), wherein Ω is the minimum value filtering window, p is the confidence adjustment parameter, and L(y) is the L channel value of pixel y in the window Ω;
3.2) calculating the rough transmittance map t_0 of the image from the dark channel map I_d of the original image I_1 and the global atmospheric light estimate A_0 (the formula is given as an image in the original publication);
3.3) applying bilateral weighted guided filtering to optimize t_0 and obtain the image transmittance t:
t = a·I_g + b
wherein I_g is the grayscale image of the original image I, and a and b are the guided filtering coefficients, whose expressions are given as formula images in the original publication; ε is a tolerance factor, d is the filtering window, ω(i,j,k,l) is the filtering weight coefficient, (k,l) denotes the center coordinates of the filtering window, (i,j) denotes the other coordinates in the window, and ω_m is a kernel function with m taking values in {1,2,3,4} (given as a formula image in the original publication), wherein σ_d is the spatial-domain weight and σ_r is the range weight; I_m(i,j) and I_m(k,l), m ∈ {1,2,3,4}, denote the pixel value of the corresponding image at points (i,j) and (k,l), with I_2 = t_0 and I_3 = I_g, while I_1 and I_4 are defined by formula images in the original publication and involve the matrix Hadamard product, denoted ⊙.
6. The method for recovering the low-quality image of the fire scene for the rescue robot as recited in claim 2, wherein in the step 4), the image restoration module outputs the final clear image J by inverting the atmospheric scattering model with the estimated atmospheric light A_0 and transmittance t (the restoration formula is given as an image in the original publication).
CN202210932068.8A 2022-08-04 2022-08-04 A fire scene low-quality image restoration method for rescue robots Active CN115170437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210932068.8A CN115170437B (en) 2022-08-04 2022-08-04 A fire scene low-quality image restoration method for rescue robots

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210932068.8A CN115170437B (en) 2022-08-04 2022-08-04 A fire scene low-quality image restoration method for rescue robots

Publications (2)

Publication Number Publication Date
CN115170437A true CN115170437A (en) 2022-10-11
CN115170437B CN115170437B (en) 2025-05-09

Family

ID=83476983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210932068.8A Active CN115170437B (en) 2022-08-04 2022-08-04 A fire scene low-quality image restoration method for rescue robots

Country Status (1)

Country Link
CN (1) CN115170437B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309607A (en) * 2023-05-25 2023-06-23 山东航宇游艇发展有限公司 Ship type intelligent water rescue platform based on machine vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767354A (en) * 2017-12-08 2018-03-06 福州大学 A kind of image defogging algorithm based on dark primary priori
US20200394767A1 (en) * 2019-06-17 2020-12-17 China University Of Mining & Technology, Beijing Method for rapidly dehazing underground pipeline image based on dark channel prior

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767354A (en) * 2017-12-08 2018-03-06 福州大学 A kind of image defogging algorithm based on dark primary priori
US20200394767A1 (en) * 2019-06-17 2020-12-17 China University Of Mining & Technology, Beijing Method for rapidly dehazing underground pipeline image based on dark channel prior

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴敬理 et al., "Path planning for mobile robots in high-temperature mixed obstacle space", Journal of Zhejiang University (Engineering Science), vol. 55, no. 10, 31 October 2021 (2021-10-31) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309607A (en) * 2023-05-25 2023-06-23 山东航宇游艇发展有限公司 Ship type intelligent water rescue platform based on machine vision
CN116309607B (en) * 2023-05-25 2023-07-28 山东航宇游艇发展有限公司 Ship type intelligent water rescue platform based on machine vision

Also Published As

Publication number Publication date
CN115170437B (en) 2025-05-09

Similar Documents

Publication Publication Date Title
CN106530246B (en) Image defogging method and system based on dark Yu non local priori
CN107301623B (en) Traffic image defogging method and system based on dark channel and image segmentation
CN106157267B (en) Image defogging transmissivity optimization method based on dark channel prior
Gao et al. Sand-dust image restoration based on reversing the blue channel prior
CN101783012B (en) An Automatic Image Dehazing Method Based on Dark Channel Color
CN108564597B (en) Video foreground object extraction method fusing Gaussian mixture model and H-S optical flow method
CN111861896A (en) A UUV-Oriented Color Compensation and Restoration Method for Underwater Images
CN113256533B (en) Self-adaptive low-illumination image enhancement method and system based on MSRCR
CN109087254B (en) Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method
CN106875351A (en) A kind of defogging method towards large area sky areas image
CN104318524A (en) Method, device and system for image enhancement based on YCbCr color space
CN105550999A (en) Video image enhancement processing method based on background reuse
CN104809709A (en) Single-image self-adaptation defogging method based on domain transformation and weighted quadtree decomposition
CN114219732A (en) Image defogging method and system based on sky region segmentation and transmissivity refinement
Yu et al. Image and video dehazing using view-based cluster segmentation
CN112419163B (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN114677289A (en) An image dehazing method, system, computer equipment, storage medium and terminal
CN111462002A (en) Underwater image enhancement and restoration method based on convolutional neural network
CN108564538A (en) Image haze removing method and system based on ambient light difference
CN110544216A (en) Deep Learning-Based Video Dehazing System
CN111325688A (en) Unmanned aerial vehicle image defogging method fusing morphological clustering and optimizing atmospheric light
CN108133462A (en) A kind of restored method of the single image based on gradient fields region segmentation
CN111598788B (en) Single image defogging method based on quadtree decomposition and non-local prior
CN114693548B (en) Dark channel defogging method based on bright area detection
CN109949239B (en) An adaptive sharpening method for multi-density and multi-scene haze images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant