CN115170437A - Fire scene low-quality image recovery method for rescue robot - Google Patents
- Publication number: CN115170437A
- Application number: CN202210932068.8A
- Authority: CN (China)
- Prior art keywords: image, atmospheric light, transmissivity, flare, region
- Legal status: Pending
Classifications
- G06T 5/73 — Image enhancement or restoration; Deblurring; Sharpening
- G06T 7/11 — Image analysis; Region-based segmentation
- G06T 7/136 — Segmentation; Edge detection involving thresholding
- G06T 7/155 — Segmentation; Edge detection involving morphological operators
- G06T 2207/10024 — Image acquisition modality; Color image
- G06T 2207/20028 — Filtering details; Bilateral filtering
Abstract
The invention provides a fire scene low-quality image recovery method for a rescue robot. The method segments the flame region with a region threshold segmentation algorithm, so that the flame light source does not distort the global atmospheric light estimation; an atmospheric light detection operator is designed and accurate atmospheric light parameters are obtained from superpixel blocks, solving the problem of distorted global atmospheric light estimation; and a transmissivity estimation optimization module is constructed that refines the transmissivity with a bilateral weighted guided filtering method, avoiding the halo artifacts produced by traditional methods. The method improves the clarity of recovered fire scene images and provides a data basis for scene detection, identification, environment mapping and path planning by the rescue robot.
Description
Technical Field
The invention belongs to the field of rescue robot image processing, and particularly relates to a fire scene low-quality image recovery method for a rescue robot.
Background
In a rescue fire environment, combustion is usually accompanied by uneven fire and smoke, and when a rescue robot executes a task in such a scene, the collected images are usually degraded by environmental factors such as light, flame and smoke.
Because fire scene images have unevenly distributed ambient light and uneven fog concentration, existing methods suffer from estimation distortion when processing such scenes. By studying a method for clarifying low-quality images in rescue fire environments, low-quality degraded images are restored to high-quality images, providing a data basis for scene detection, identification, environment mapping and path planning by the rescue robot.
Disclosure of Invention
In order to solve the problems in the background art, the invention provides a fire scene low-quality image recovery method for a rescue robot.
The technical scheme adopted by the invention is as follows:
1. fire scene low-quality image recovery system for rescue robot
The system comprises a regional atmospheric light estimation module, a transmissivity estimation optimization module and an image reconstruction and recovery module, wherein the regional atmospheric light estimation module comprises a flare region atmospheric light estimation module and a global atmospheric light estimation module;
the flare region atmospheric light estimation module is used for dividing the original image into a flare region image and a non-flare region image;
the global atmospheric light estimation module is used for performing superpixel segmentation on the non-flare region image and designing an atmospheric light detection operator to obtain an estimated value of the global atmospheric light;
the transmissivity estimation optimization module is used for estimating the rough transmissivity and calculating the accurate transmissivity by applying a bilateral weighted guided filtering method;
and the image reconstruction and recovery module is used for reconstructing and recovering a clear image.
2. Fire scene low-quality image recovery method for rescue robot using the above system
Step 1: the flare region atmospheric light estimation module divides the original image I into a flare region image I_F and a non-flare region image I_NF;
Step 2: the global atmospheric light estimation module calculates a global atmospheric light estimate A_0 by constructing an atmospheric light detection operator;
Step 3: the dark channel map and the global atmospheric light estimate are input to the transmissivity estimation optimization module to estimate the rough transmissivity, and bilateral weighted guided filtering optimization is applied to obtain the accurate transmissivity t;
Step 4: the image is recovered by the image reconstruction and recovery module to obtain the final clear image.
The step 1) is specifically as follows:
1.1) Combine the RGB color space criterion and the HLS color space criterion to divide the original image I and obtain the initial flare region image I_1;
1.2) Apply open-then-close morphological processing to the initial flare region image I_1, deleting isolated points in the image and filling internal holes in the region image I_1, then apply Gaussian filtering to the morphologically processed image to obtain the final flare region image I_F; obtain the non-flare region image I_NF from the original image I and the flare region image I_F. A sketch of this segmentation follows.
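The following is a minimal OpenCV sketch of step 1, using the RGB and HLS criteria given in the embodiment below (R > R_T, R ≥ G ≥ B, L_min ≤ L ≤ L_max) together with open-then-close morphology and Gaussian filtering; the threshold values, kernel sizes and the function name are illustrative assumptions rather than values taken from the patent.

```python
# Hedged sketch of the flare-region segmentation (RGB + HLS criteria, open-then-close
# morphology, Gaussian filtering). Thresholds and kernel sizes are assumptions.
import cv2
import numpy as np

def segment_flare_region(img_bgr, R_T=200, L_min=140, L_max=255):
    b, g, r = cv2.split(img_bgr)
    hls = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HLS)
    L = hls[..., 1]                                       # lightness channel of HLS

    # RGB criterion: R > R_T and R >= G >= B; HLS criterion: L_min <= L <= L_max.
    rgb_mask = (r > R_T) & (r >= g) & (g >= b)
    hls_mask = (L >= L_min) & (L <= L_max)
    I1 = (rgb_mask & hls_mask).astype(np.uint8) * 255     # initial flare region I_1

    # Open-then-close morphology: remove isolated points, then fill small internal holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(I1, cv2.MORPH_OPEN, kernel)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

    # Gaussian filtering of the morphological result gives the final flare region mask I_F.
    I_F = cv2.GaussianBlur(closed, (5, 5), 0)
    non_flare_mask = I_F == 0                              # pixels belonging to I_NF
    return I_F, non_flare_mask
```

Here I_F is produced as a smoothed mask; the non-flare image I_NF of the patent corresponds to the original image restricted to the pixels where this mask is zero.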
The step 2) is specifically as follows:
2.1) Apply minimum value filtering to the three channels R, G and B of the original image I to obtain the dark channel map I_d of the image;
2.2) For the original image I, construct the atmospheric light detection operator score:
score = (1 - S) I_d
where S is the saturation component of the original image I;
2.3) Perform superpixel segmentation on the non-flare region image I_NF to obtain a segmentation map I_s, and calculate the atmospheric light detection operator score of each superpixel block s_i ∈ I_s as the mean of the operator over its pixels:
score(s_i) = (1/|s_i|) Σ_{x ∈ s_i} score(x)
where |s_i| is the number of pixels in the superpixel block s_i and x is a pixel in s_i;
2.4) Sort the score(s_i) values of the superpixel blocks in descending order and select the superpixel block with the largest atmospheric light detection operator score, denoted s_max; average the pixel values of all pixels in s_max to obtain the global atmospheric light estimate A_0:
A_0 = (1/|s_max|) Σ_{x ∈ s_max} I(x)
where |s_max| is the number of pixels in the superpixel block s_max and I(x) is the pixel value of pixel x in s_max. A sketch of this procedure follows.
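A minimal Python sketch of sub-steps 2.1-2.4 is given below, assuming scikit-image's SLIC for the superpixel segmentation, HSV saturation as a stand-in for the HLS saturation component S, and a square minimum-filter window for the dark channel; the window size, superpixel count and function names are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of steps 2.1-2.4: dark channel, detection operator, superpixel-based
# atmospheric light. Window size, superpixel count and names are assumptions.
import numpy as np
from scipy.ndimage import minimum_filter
from skimage.segmentation import slic
from skimage.color import rgb2hsv

def dark_channel(img_rgb, win=15):
    """Per-pixel min over R, G, B, then a minimum filter over a win x win patch."""
    min_rgb = img_rgb.min(axis=2)
    return minimum_filter(min_rgb, size=win)

def estimate_global_atmospheric_light(img_rgb, non_flare_mask, n_segments=300):
    """img_rgb: float RGB in [0, 1]; non_flare_mask: bool array, True outside flames."""
    I_d = dark_channel(img_rgb)
    S = rgb2hsv(img_rgb)[..., 1]           # saturation (HSV as a stand-in for HLS S)
    score = (1.0 - S) * I_d                # atmospheric light detection operator

    # Superpixel segmentation restricted to the non-flare region.
    labels = slic(img_rgb, n_segments=n_segments, start_label=1, mask=non_flare_mask)

    best_label, best_score = None, -np.inf
    for lab in np.unique(labels):
        if lab == 0:                       # label 0 = masked-out (flare) pixels
            continue
        block = labels == lab
        mean_score = score[block].mean()   # per-superpixel mean of the operator
        if mean_score > best_score:
            best_score, best_label = mean_score, lab

    # Average the pixel values of the winning superpixel block s_max -> A_0.
    A0 = img_rgb[labels == best_label].reshape(-1, 3).mean(axis=0)
    return A0
```

The sketch returns A_0 per colour channel; averaging the three channels gives a single scalar estimate if grey atmospheric light is assumed.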
The step 3) is specifically as follows:
3.1) From the L channel of the original image I, calculate the adaptive confidence t*(x), where Ω is the minimum value filtering interval, p is a confidence adjustment parameter, and L(y) is the L channel value of pixel y in the region Ω;
3.2) From the dark channel map I_d of the original image I and the global atmospheric light estimate A_0, calculate the rough transmittance map t_0 of the image;
3.3) Apply bilateral weighted guided filtering to t_0 to obtain the optimized image transmittance t, calculated as:
t = a * I_g + b
where I_g is the grayscale map of the original image I, and a and b are the guided filter coefficients;
where ε is a tolerance factor, d is the filtering interval, ω(i, j, k, l) is the filtering weighting coefficient, (k, l) are the coordinates of the filtering window center, (i, j) are the other coordinates in the window, and ω_m, m ∈ {1, 2, 3, 4}, is the kernel function;
where σ_d is the spatial weight and σ_r is the range weight;
where I_m(i, j) and I_m(k, l), m ∈ {1, 2, 3, 4}, are the pixel values of the corresponding image at points (i, j) and (k, l), with I_2 = t_0 and I_3 = I_g. A sketch of this transmittance estimation and refinement follows.
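Because the transmittance formulas of the source are reproduced only as figures, the sketch below uses commonly assumed stand-ins: the dark-channel-prior form of the rough transmittance (with a constant ω in place of the adaptive confidence t*(x)), the classical bilateral weighting kernel built from σ_d and σ_r, and an unweighted box-filter guided filter producing t = a * I_g + b; the patent's bilateral weighted variant would additionally weight the window statistics with ω(i, j, k, l). All parameter values and function names are illustrative assumptions.

```python
# Hedged sketch of step 3: coarse transmittance from the dark channel plus a
# guided-filter refinement t = a * I_g + b. Forms and parameters are assumptions.
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def coarse_transmittance(img_rgb, A0, omega=0.95, win=15):
    """Standard dark-channel-style rough transmittance t_0 (omega stands in for the
    adaptive confidence t*(x), kept constant here for simplicity)."""
    normalized = img_rgb / np.maximum(A0, 1e-6)            # divide each channel by A_0
    dark = minimum_filter(normalized.min(axis=2), size=win)
    return 1.0 - omega * dark

def bilateral_weight(di, dj, dI, sigma_d=3.0, sigma_r=0.1):
    """Bilateral weighting coefficient omega(i, j, k, l): spatial offset (di, dj)
    and intensity difference dI between (i, j) and the window centre (k, l)."""
    return np.exp(-(di**2 + dj**2) / (2 * sigma_d**2) - (dI**2) / (2 * sigma_r**2))

def guided_refine(t0, I_g, radius=20, eps=1e-3):
    """Box-filter guided filter giving t = a * I_g + b (the patent's version would
    additionally weight these window statistics bilaterally)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I_g, size)
    mean_t = uniform_filter(t0, size)
    corr_It = uniform_filter(I_g * t0, size)
    corr_II = uniform_filter(I_g * I_g, size)
    cov_It = corr_It - mean_I * mean_t
    var_I = corr_II - mean_I * mean_I
    a = cov_It / (var_I + eps)
    b = mean_t - a * mean_I
    # Average the coefficients over the windows covering each pixel, then apply.
    return uniform_filter(a, size) * I_g + uniform_filter(b, size)
```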
In step 4, the image reconstruction and recovery module outputs the final clear image J, reconstructed from the original image I, the global atmospheric light estimate A_0 and the refined transmittance t.
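The recovery expression is likewise shown only as a figure in the source; the short sketch below assumes the standard atmospheric scattering model inversion J = (I - A_0) / max(t, t_min) + A_0, which is the reconstruction usually paired with dark-channel dehazing, with t_min an illustrative lower bound that avoids division by near-zero transmittance.

```python
# Hedged sketch of step 4: reconstruct the clear image. The inversion below is an
# assumed stand-in for the patent's recovery formula; t_min is illustrative.
import numpy as np

def recover_image(img_rgb, A0, t, t_min=0.1):
    """J = (I - A_0) / max(t, t_min) + A_0, applied per colour channel."""
    t_clipped = np.clip(t, t_min, 1.0)[..., np.newaxis]
    J = (img_rgb - A0) / t_clipped + A0
    return np.clip(J, 0.0, 1.0)
```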
the invention has the beneficial effects that:
according to the fire scene low-quality image recovery method for the rescue robot, the flame region is segmented through the region threshold segmentation algorithm, the distortion influence of a flame light source on overall atmospheric light estimation is avoided, the atmospheric light detection operator is designed, accurate atmospheric light parameters are obtained based on the super-pixel block, and the problem of overall atmospheric light estimation distortion is solved; a transmissivity estimation optimization module is constructed, the transmissivity is refined based on a bilateral weighting guide filtering method, and the problems of halo and the like caused by a general method are solved. The method provided by the invention is beneficial to improving the clarity of the fire scene image recovery, and provides a data basis for improving the scene detection, identification, environment mapping and path planning of the rescue robot.
Drawings
FIG. 1 is a flow chart of the rescue fire scene image clarification method;
FIG. 2 is an original image in an embodiment of the present invention;
FIG. 3 shows the flare region obtained by the flare region segmentation module after binarization processing;
FIG. 4 is a graph of a coarse transmittance estimate;
FIG. 5 is a graph of fine transmittance estimates after bilateral weighted guided filtering.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the drawings.
1. The invention comprises a fire scene image clarification system for rescue. The system comprises a regional atmospheric light estimation module, a transmissivity estimation optimization module and an image reconstruction and recovery module. The regional atmospheric light estimation module comprises a flare region atmospheric light estimation module and a global atmospheric light estimation module.
2. As shown in FIG. 1, the input original image I first passes through the regional atmospheric light estimation module: a region threshold segmentation algorithm segments the flame region, then superpixel segmentation is performed and an atmospheric light detection operator is designed to obtain the global atmospheric light. The transmissivity estimation optimization module estimates the rough transmissivity and then computes the refined transmissivity with a bilateral weighted guided filtering method. Finally, the image reconstruction and recovery module recovers a high-quality picture.
3. The flare region atmospheric light estimation module divides the original image I shown in FIG. 2 into a flare region image I_F and a non-flare region image I_NF.
The method specifically comprises the following steps:
Step 1: divide the original image I into fire regions by combining the RGB color space criterion and the HLS color space criterion to obtain the region image I_1.
The RGB color space criterion is:
R > R_T
R ≥ G ≥ B
The HLS color space criterion is:
L_min ≤ L ≤ L_max
where R, G and B are the red, green and blue components of the RGB color space of the image, R_T is the red threshold, S and L are the saturation and lightness components of the HLS color space of the image, L_min is the minimum lightness threshold, and L_max is the maximum lightness threshold.
Step 2: apply open-then-close morphological processing to the region image I_1, deleting isolated points in the image and filling holes in the target region, then apply Gaussian filtering to the result of the morphological processing to obtain the flare region image I_F shown in FIG. 3 and the non-flare region image I_NF.
4. The global atmospheric light estimation module calculates the global atmospheric light value A_0 by constructing an atmospheric light detection operator.
The method specifically comprises the following steps:
Step one: apply minimum value filtering to the three channels R, G and B of the input image I to obtain the dark channel map I_d of the image;
Step two: for the input image I, design the atmospheric light detection operator score:
score = (1 - S) I_d
where S is the saturation component of the image I.
Step three: perform superpixel segmentation on the non-flare region image I_NF to obtain the segmentation map I_s, and calculate the atmospheric light detection operator score of each superpixel block s_i ∈ I_s.
Step four: sort the score values in descending order, select the superpixel block s_max with the largest score value, and average the pixel values of all pixels in this superpixel to obtain the atmospheric light estimate A_0.
5. The transmissivity estimation optimization module takes the dark channel map and the global atmospheric light estimate as input, estimates the rough transmissivity, and optimizes it with bilateral weighted guided filtering to obtain the accurate transmissivity t.
The method specifically comprises the following steps:
Step one: calculate the adaptive confidence t*(x) from the L channel of the image, where Ω is the minimum value filtering interval and p is a confidence adjustment parameter.
Step two: as shown in FIG. 4, calculate the rough transmittance map t_0 of the image.
Step three: as shown in FIG. 5, apply bilateral weighted guided filtering to t_0 to estimate the accurate image transmittance t, calculated as:
t = a * I_g + b
where a and b are the bilateral weighted guided filter coefficients, ε is a tolerance factor, d is the filtering interval, ω(i, j, k, l) is the filtering weighting coefficient, (k, l) are the window center coordinates, (i, j) are the other coordinates in the window, and ω_m, m ∈ {1, 2, 3, 4}, is the kernel function, in which σ_d is the spatial weight and σ_r is the range weight.
Here I_m(i, j) and I_m(k, l) are the pixel values of the corresponding image at points (i, j) and (k, l), with I_2 = t_0 and I_3 = I_g.
6. The image reconstruction and recovery module outputs the final clear image J, reconstructed from the original image I, the atmospheric light estimate A_0 and the refined transmittance t.
Compared with other methods, the proposed method achieves high recovery quality on low-quality fire scene images, occupies only limited computing resources, and can be effectively applied to image preprocessing for tasks such as fire scene detection and identification by the rescue robot.
Claims (6)
1. A fire scene low-quality image recovery system for a rescue robot is characterized by comprising a regional atmospheric light estimation module, a transmissivity estimation optimization module and an image reconstruction and recovery module, wherein the regional atmospheric light estimation module comprises a flare region atmospheric light estimation module and a global atmospheric light estimation module;
the flare region atmospheric light estimation module is used for dividing the original image into a flare region image and a non-flare region image;
the global atmospheric light estimation module is used for performing superpixel segmentation on the image and designing an atmospheric light detection operator to obtain an estimated value of the global atmospheric light;
the transmissivity estimation optimization module is used for estimating the rough transmissivity and calculating the accurate transmissivity based on a bilateral weighted guided filtering method;
and the image reconstruction and recovery module is used for reconstructing and recovering a clear image.
2. The fire scene low-quality image recovery method for the rescue robot using the system of claim 1, characterized by comprising the following steps:
step 1: the flare region atmospheric light estimation module divides the original image I into a flare region image I_F and a non-flare region image I_NF;
step 2: the global atmospheric light estimation module calculates a global atmospheric light estimate A_0 by constructing an atmospheric light detection operator;
step 3: the dark channel map and the global atmospheric light estimate are input to the transmissivity estimation optimization module to estimate the rough transmissivity, and bilateral weighted guided filtering optimization is applied to obtain the accurate transmissivity t;
step 4: the image is recovered by the image reconstruction and recovery module to obtain the final clear image.
3. The method for recovering the low-quality image of the fire scene of the rescue robot as recited in claim 2, wherein step 1) is specifically as follows:
1.1) Combine the RGB color space criterion and the HLS color space criterion to divide the original image I and obtain the initial flare region image I_1;
1.2) Apply open-then-close morphological processing to the initial flare region image I_1, deleting isolated points in the image and filling internal holes in the region image I_1, then apply Gaussian filtering to the morphologically processed image to obtain the final flare region image I_F; obtain the non-flare region image I_NF from the original image I and the flare region image I_F.
4. The method for recovering the low-quality image of the fire scene of the rescue robot as recited in claim 2, wherein step 2) is specifically as follows:
2.1) Apply minimum value filtering to the three channels R, G and B of the original image I to obtain the dark channel map I_d of the image;
2.2) For the original image I, construct the atmospheric light detection operator score:
score = (1 - S) I_d
where S is the saturation component of the original image I;
2.3) Perform superpixel segmentation on the non-flare region image I_NF to obtain a segmentation map I_s, and calculate the atmospheric light detection operator score of each superpixel block s_i in I_s as the mean of the operator over its pixels:
score(s_i) = (1/|s_i|) Σ_{x ∈ s_i} score(x)
where |s_i| is the number of pixels in the superpixel block s_i and x is a pixel in s_i;
2.4) Sort the score(s_i) values of the superpixel blocks in descending order and select the superpixel block with the largest atmospheric light detection operator score, denoted s_max; average the pixel values of all pixels in s_max to obtain the global atmospheric light estimate A_0:
A_0 = (1/|s_max|) Σ_{x ∈ s_max} I(x)
where |s_max| is the number of pixels in s_max and I(x) is the pixel value of pixel x.
5. The method for recovering the low-quality image of the fire scene of the rescue robot as recited in claim 2, wherein step 3) is specifically as follows:
3.1) From the L channel of the original image I, calculate the adaptive confidence t*(x), where Ω is the minimum value filtering interval, p is a confidence adjustment parameter, and L(y) is the L channel value of pixel y in the region Ω;
3.2) From the dark channel map I_d of the original image I and the global atmospheric light estimate A_0, calculate the rough transmittance map t_0 of the image;
3.3) Apply bilateral weighted guided filtering to t_0 to obtain the optimized image transmittance t, calculated as:
t = a * I_g + b
where I_g is the grayscale map of the original image I, and a and b are the guided filter coefficients;
where ε is a tolerance factor, d is the filtering interval, ω(i, j, k, l) is the filtering weighting coefficient, (k, l) are the coordinates of the filtering window center, (i, j) are the other coordinates in the window, and ω_m, m ∈ {1, 2, 3, 4}, is the kernel function, in which σ_d is the spatial weight and σ_r is the range weight;
and I_m(i, j) and I_m(k, l), m ∈ {1, 2, 3, 4}, are the pixel values of the corresponding image at points (i, j) and (k, l), with I_2 = t_0 and I_3 = I_g.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210932068.8A | 2022-08-04 | 2022-08-04 | Fire scene low-quality image recovery method for rescue robot
Publications (1)
Publication Number | Publication Date
---|---
CN115170437A (en) | 2022-10-11
Family
ID=83476983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210932068.8A Pending CN115170437A (en) | 2022-08-04 | 2022-08-04 | Fire scene low-quality image recovery method for rescue robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115170437A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116309607A (en) * | 2023-05-25 | 2023-06-23 | 山东航宇游艇发展有限公司 | Ship type intelligent water rescue platform based on machine vision |
CN116309607B (en) * | 2023-05-25 | 2023-07-28 | 山东航宇游艇发展有限公司 | Ship type intelligent water rescue platform based on machine vision |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |