CN113112429A - Universal enhancement framework for foggy images under complex illumination condition - Google Patents

Universal enhancement framework for foggy images under complex illumination condition Download PDF

Info

Publication number
CN113112429A
CN113112429A CN202110457241.9A CN202110457241A CN113112429A CN 113112429 A CN113112429 A CN 113112429A CN 202110457241 A CN202110457241 A CN 202110457241A CN 113112429 A CN113112429 A CN 113112429A
Authority
CN
China
Prior art keywords
layer
detail
base layer
gamma
estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110457241.9A
Other languages
Chinese (zh)
Other versions
CN113112429B (en
Inventor
米泽田
李圆圆
付先平
张军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202110457241.9A priority Critical patent/CN113112429B/en
Publication of CN113112429A publication Critical patent/CN113112429A/en
Application granted granted Critical
Publication of CN113112429B publication Critical patent/CN113112429B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T5/70
    • G06T5/94

Abstract

The invention discloses a universal enhancement framework for a foggy image under a complex illumination condition, which comprises the steps of constructing a decomposition model, and decomposing an input image into a basic layer and a detail layer; performing mask 1 estimation in the base layer to obtain the illumination distribution of the input image; the brightness of the base layer is improved, a gamma correction strategy is adopted to obtain a self-adaptive gamma factor, and the illumination of the base layer is adjusted according to the gamma factor to obtain the base layer with the improved brightness; performing mask 2 estimation on the detail layer to obtain refined detail mask estimation; carrying out transmission map estimation in an original image; designing a contrast enhancement factor; obtaining an enhanced detail layer according to the contrast enhancement factor; and reconstructing the basic layer with the improved brightness and the detail layer with the improved contrast to obtain the restored image. The method has higher universality and good robustness, can be suitable for low-light scenes with various illumination distributions, and obviously improves the visibility of underwater images.

Description

Universal enhancement framework for foggy images under complex illumination condition
Technical Field
The invention relates to the field of images, in particular to a universal enhancement framework for fog images under complex illumination conditions.
Background
Many defogging methods have been proposed to improve the visibility of a blurred scene from a single image. Most pioneer works address this problem by inferring image degradation models, which is an inappropriate problem. Strong a priori knowledge or assumptions are required to estimate the parameters, especially the transmittance and the atmospheric light. Generally, the atmospheric light in a hazy image during the day is considered to be the brightest area in the background. However, the plurality of light sources seriously affect the accuracy of the estimation of the atmospheric light due to uneven illumination under night conditions. As a result, although methods based on Dark Channel Priors (DCP) 1 and variants [2] to [4] have been significantly successful, these demonstrate their effectiveness well only when demisting during the day, and are far from satisfactory in hazy scenes during the night.
To address this challenging problem, some researchers have focused on solving the nighttime defogging problem. Pei and Lee [5] first apply a color transfer function to reduce the color cast of the nightly hazy image before defogging based on a Dark Channel Prior (DCP). But due to the variation of the original color distribution, an artifact effect may occur. Similarly, Zhang [6] performs color correction and estimates the non-uniform incident illumination prior to DCP-based defogging. However, the light compensation will affect the effect of the subsequent color correction. Given that the local maximum intensity is mainly contributed by ambient lighting, Zhang [7] proposes a Maximum Reflectance Prior (MRP) that is specific to night time. Li 8 proposes a layer separation strategy to remove the halo of the light source by modeling the non-uniform illuminated night blurred image as a linear superposition of a low-light background and the lighting effect. Finally, a DCP-based defogging operation is followed to further improve visibility. As observed, the background recovered by these methods tends to be too dark. Unfortunately, all of the above methods are limited to handling nighttime conditions. To more accurately estimate atmospheric light and to be suitable for daytime and nighttime defogging, Anguti [9] introduced an efficient fusion-based technique that used local atmospheric light estimation. However, manually selecting the size of the patch is inefficient and the technique does not work well when lighting conditions are very poor. Zhu [10] introduces a Color Attenuation Prior (CAP), and analyzes a large amount of foggy images to find that the concentration of the fog changes along with the change of the depth of field, and the higher the concentration of the fog, the larger the depth of field, and the larger the difference between the brightness and the saturation of the images.
Reference documents:
[1]K.He,J.Sun,and X.Tang,“Single image haze removal using dark channel prior,”IEEE transactions on pattern analysis and machine intelligence,vol.33,no.12,pp.2341–2353,2010.
[2]H.Xu,J.Guo,Q.Liu,and L.Ye,“Fast image dehazing using improved dark channel prior,”in 2012IEEE International Conference on Information Science and Technology.IEEE,2012,pp.663–667.
[3]J.-B.Wang,N.He,L.-L.Zhang,and K.Lu,“Single image dehazing with a physical model and dark channel prior,”Neurocomputing,vol.149,pp.718–728,2015.
[4]Q.Wu,J.Zhang,W.Ren,W.Zuo,and X.Cao,“Accurate transmission estimation for removing haze and noise from a single image,”IEEE Transactions on Image Processing,vol.29,pp.2583–2597,2019.
[5]S.C.Pei and T.Y.Lee,“Nighttime haze removal using color transfer pre-processing and dark channel prior,”in Image Processing(ICIP),19th IEEE International Conference on,2012,pp.957–960.
[6]J.Zhang,Y.Cao,and Z.Wang,“Nighttime haze removal based on a new imaging model,”in IEEE International Conference on Image Processing(ICIP),2014,pp.4557–4561.
[7]J.Zhang,Y.Cao,S.Fang,Y.Kang,and C.W.Chen,“Fast haze removal for nighttime image using maximum reflectance prior,”in IEEE Conference on Computer Vision&Pattern Recognition(CVPR),2017,pp.7418–7426.
[8]Y.Li,R.T.Tan,and M.S.Brown,“Nighttime haze removal with glow and multiple light colors,”in IEEE International Conference on Computer Vision(ICCV),2015,pp.226–234.
[9]C.Ancuti,C.O.Ancuti,C.D.Vleeschouwer,and A.C.Bovik,“Day and night-time dehazing by local airlight estimation,”IEEE Transactions on Image Processing,vol.29,no.99,pp.6264–6275,2020.
[10]Q.Zhu,J.Mai,and L.Shao,“A fast single image haze removal algorithm using color attenuation prior,”IEEE transactions on image processing,vol.24,no.11,pp.3522–3533,2015.
disclosure of Invention
The present invention provides a generic enhancement framework for hazy images under complex lighting conditions to overcome the above technical problems.
A universal enhancement framework for fog images in complex lighting conditions, comprising,
the method comprises the following steps: constructing a decomposition model, and defining an input image by using a defogging model;
step two: a base-detail decomposition, which decomposes an original input image into a base layer and a detail layer;
step three: performing mask 1 estimation in the base layer, and comparing the average value of each channel in the base layer with the average value of the base layer to obtain the illumination distribution of the input image;
step four: the method comprises the steps of carrying out brightness improvement on a basic layer, obtaining a self-adaptive gamma factor by adopting a gamma correction strategy, and adjusting the illumination of the basic layer according to the gamma factor to obtain the basic layer with improved brightness;
step five: performing mask 2 estimation on the detail layer to obtain refined detail mask estimation;
step six: carrying out transmission map estimation in an original image;
step seven: estimating a contrast enhancement factor, and designing the contrast enhancement factor;
step eight: enhancing the detail layer, and obtaining the enhanced detail layer according to the contrast enhancement factor of the step seven;
step nine: and (5) reconstructing to obtain a restored image according to the base layer with the improved brightness obtained in the step four and the detail layer with the improved contrast obtained in the step eight.
Preferably, obtaining the base layer with improved brightness means performing correction through the formulas (1), (2) and (3),
Figure BDA0003040947290000031
gamma’=||1-β*gamma|| (2)
Figure BDA0003040947290000032
where α and β are both constant parameters.
Preferably, designing the contrast enhancement factor means calculating by equation (4),
Figure BDA0003040947290000033
wherein the content of the first and second substances,
Figure BDA0003040947290000034
is a hyperbolic tangent function, gamma is a constant parameter controlling the sigmoid curve, and k is a contrast enhancement factor.
The universal enhancement framework for the foggy images under the complex illumination condition has higher universality, robustness and accuracy, and can be suitable for weak light scenes with various illumination distributions, such as non-uniform illumination images, pure low-illumination images, remote sensing images, underwater images and the like; layering the image to obtain two masks representing illumination distribution and effective detail area respectively, benefiting the two masks similar to the weight graph, and the proposed framework can achieve the effects of area self-adaptive illumination adjustment and contrast enhancement; qualitative and quantitative comparison shows that the method has excellent robustness, accuracy and flexibility.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a graph comparing PSNR evaluation results of the present invention;
FIG. 3 is a graph comparing SSIM evaluation results according to the present invention;
FIG. 4 is a graph of non-uniform illumination comparison results;
FIG. 5 is a graph of remote sensing image comparison results;
fig. 6 is a graph of underwater image comparison results.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the present invention, and as shown in fig. 1, the method of this embodiment may include: a universal enhancement framework for hazy images in complex lighting conditions, comprising the steps of:
step 1: constructing a decomposition model, and defining an input image by using a defogging model;
Figure BDA0003040947290000041
where x is the pixel value, I (x) is the input image, J (x) is the enhanced image, t (x) is the transmission map, B is the global background light, l's(x) And l't(x) The enhanced base layer and detail layer are shown separately.
Step 2: the basic-detail decomposition is carried out,
considering the original input image as a superposition of two layers, namely a base layer and a detail layer, equation (1) can be simply expressed as:
Figure BDA0003040947290000051
wherein c belongs to { red, green, blue }, and represents three RGB color channels, Is(x) Is a base layer, It(x) Is a fine layer.
In this problem, a TV image reconstruction formula based on the rudin walker-Fatemi method is employed, and based on TV regularization, the base layer is obtained by minimizing the following objective function:
Figure BDA0003040947290000052
where x is the pixel index, λ is a parameter that adjusts the smoothness of the base layer,
Figure BDA0003040947290000056
is a gradient operator, which is a linear operator,
Figure BDA0003040947290000057
the gradient of the base layer is regularized by a 2 norm and the base layer is solved by TV-L2. Then according to the formula (3),
Figure BDA0003040947290000053
the fine layer I can be obtainedt(x)。
And step 3: the mask 1 estimation is performed in the base layer,
comparing the average value of each channel in the basic layer with the average value of the basic layer, namely when the average value of the three channels is larger than the average value of the basic layer, marking as 1, otherwise, marking as 0, and obtaining the illumination distribution M representing the input imageIIt can be expressed as:
Figure BDA0003040947290000054
wherein, in order to avoid artifacts, M isIObtaining refined M by using guided filteringI
And 4, step 4: the brightness of the base layer is boosted by the brightness boosting circuit,
due to M in step 3IThe light source position can be indicated, therefore the invention proposes a new gamma correction strategy, namely, the difference value between the mask 1 and the average value thereof is utilized, when the pixel value of the mask 1 is larger than the average value thereof, the pixel value is larger, and the suppression is needed; on the contrary, the gamma factor can be obtained when enhancement is needed, and the self-adaptive gamma factor can be obtained by further constraining the gamma factor. The method can adaptively adjust the base layer I according to the distribution of lightsObtaining a luminance-enhanced base layer I'sThe formula can be expressed as:
Figure BDA0003040947290000055
gamma’=||1-β*gamma|| (6)
Figure BDA0003040947290000061
wherein, both alpha and beta are constant parameters, beta is a constant for controlling the brightness improvement degree, and is set as 1.5 in the invention; and for IsIf the brightness is desired to be rapidly improved in the region with a small pixel value, otherwise, the brightness is improved in a smaller amplitude, and the gamma' is less than 1 according to the gamma correction principle; then 0 can be deduced<gamma<1.33; further push out 0.75<α<1 or 1<α<1.33. Selecting 10 underwater images with the numbers of a, b, c, d, f, g, h, i, j and k, and respectively taking alpha as [0.8,0.9,1.1,1.2,1.3 ]]And performing experimental verification, calculating the peak signal-to-noise ratio and the structural similarity measure of the image, and obtaining the peak signal-to-noise ratio (PSNR) and the structure of the enhanced image when alpha is 1.2The similarity measure (SSIM) works best, as shown in fig. 2, fig. 3; therefore, in the present invention, α is 1.2.
And 5: the mask 2 evaluation is performed in the detail layer,
using a Discrete Cosine Transform (DCT) on each 8 x 8 color block in the detail layer and representing the DCT coefficients of an 8 x 8 block as matrix a, the probability that the block is called part of the scene detail is expressed as:
Figure BDA0003040947290000062
where u, v represent positions in the DCT, and A is taken out1,1,A1,2,A2,1Except the sum of the squares of all DCT coefficients and thresholding the likelihood to binary divide each block.
Setting the threshold value to 0.1, i.e. when
Figure BDA0003040947290000063
And when the value is less than the threshold value, recording as 0, otherwise, recording as 1.
The initial detail mask estimate is denoted MTIt is refined using soft matting, i.e. the refined M is obtained by minimizing the following objective functionT
argmin(mT-mT)T(mT-mT)+τmT TLsmT (9)
Wherein m isTAnd mTAre respectively a matrix
Figure BDA0003040947290000064
MTIs a regularization parameter, set to 10 in the present invention-5,LsIs a laplacian matrix generated from the original image i (x).
Step 6: the transmission map estimation is performed in the original image,
the atmospheric light B is estimated using the dark channel, first selecting the brightest 0.1% pixels in the dark channel, and then selecting the pixels of these pixels that result in the highest intensity in the image i (x) as atmospheric light.
First, converting I (x) -B into a space coordinate system, and then estimating an initial transmission diagram according to the value of the radius r (the distance from the origin, namely | | | I-B |)
Figure BDA0003040947290000065
Then through minimizing the pairs
Figure BDA0003040947290000066
To estimate the transmission map t (x):
Figure BDA0003040947290000067
where v, ω are weighting parameters, σ (x) is the standard deviation along each fog line, N (x) represents the four neighbors of pixel x, t (x) is the refined transmission map, t1(x)、
Figure BDA0003040947290000071
Are each t (x),
Figure BDA0003040947290000072
In the form of a vector.
And 7: contrast enhancement factor estimation
Having created mask 2 representing scene detail information, the present invention attempts to greatly enhance the effective detail without affecting the amplification of the fine detail (including noise). To achieve this goal, the present invention introduces a tan h function stretching MTTo scale the texture appropriately, the contrast enhancement factor can therefore be expressed as:
Figure BDA0003040947290000073
wherein the content of the first and second substances,
Figure BDA0003040947290000074
is a hyperbolic tangent function, and gamma is a constant controlling sigmoid curveThe parameter, set to 2.5 in the present invention.
And 8: the enhancement of the detail layer is carried out,
the enhanced detail layers can be obtained according to the contrast enhancement factor obtained in step 7 as follows:
I′t=k·It (12)
and step 9: the reconstruction is carried out by the user,
according to the base layer with the improved brightness obtained in the step 4 and the detail layer with the improved contrast obtained in the step 8, the restored image can be obtained:
I′=I′s+I′t (13)
six land images under non-uniform illumination conditions are selected, and comparison experiments are respectively carried out by six methods of DCP [1], CAP [10], Li et al [8], Zhang et al [6], MRP [7] and the invention, and the comparison experiment results are shown in figure 4. Two remote sensing images are selected, six methods including DCP [1], CAP [10], Li et al [8], Zhang et al [6], MRP [7] and the invention are respectively used for comparison experiments, and the comparison experiment results are shown in figure 5. Four underwater images are selected, and six methods including DCP [1], CAP [10], Li et al [8], Zhang et al [6], MRP [7] and the invention are respectively used for comparison experiments, and the comparison experiment results are shown in FIG. 6.
The beneficial effects of the whole are as follows:
(1) the proposed enhancement framework has higher universality, robustness and accuracy, and can be used for weak light scenes with various illumination distributions, such as non-uniform illumination images, pure low-illumination images, remote sensing images, underwater images and the like;
(2) layering the image to obtain two masks representing illumination distribution and effective detail area respectively, benefiting the two masks similar to the weight graph, and the proposed framework can achieve the effects of area self-adaptive illumination adjustment and contrast enhancement;
(3) qualitative and quantitative comparison shows that the method has excellent robustness, accuracy and flexibility.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (3)

1. A universal enhancement framework for fog images in complex lighting conditions, comprising,
the method comprises the following steps: constructing a decomposition model, and defining an input image by using a defogging model;
step two: a base-detail decomposition, which decomposes an original input image into a base layer and a detail layer;
step three: performing mask 1 estimation in the base layer, and comparing the average value of each channel in the base layer with the average value of the base layer to obtain the illumination distribution of the input image;
step four: the method comprises the steps of carrying out brightness improvement on a basic layer, obtaining a self-adaptive gamma factor by adopting a gamma correction strategy, and adjusting the illumination of the basic layer according to the gamma factor to obtain the basic layer with improved brightness;
step five: performing mask 2 estimation on the detail layer to obtain refined detail mask estimation;
step six: carrying out transmission map estimation in an original image;
step seven: estimating a contrast enhancement factor, and designing the contrast enhancement factor;
step eight: enhancing the detail layer, and obtaining the enhanced detail layer according to the contrast enhancement factor of the step seven;
step nine: and (5) reconstructing to obtain a restored image according to the base layer with the improved brightness obtained in the step four and the detail layer with the improved contrast obtained in the step eight.
2. The universal enhancement framework for fog images under complex lighting conditions as claimed in claim 1, wherein the base layer with enhanced brightness is obtained by correcting through equations (1), (2), (3),
Figure FDA0003040947280000011
gamma’=||1-β*gamma|| (2)
Figure FDA0003040947280000012
where α and β are both constant parameters.
3. The universal enhancement framework for fog images under complex lighting conditions as claimed in claim 1, wherein the design contrast enhancement factor is calculated by formula (4),
Figure FDA0003040947280000013
wherein the content of the first and second substances,
Figure FDA0003040947280000014
is a hyperbolic tangent function, gamma is a constant parameter controlling the sigmoid curve, and k is a contrast enhancement factor.
CN202110457241.9A 2021-04-27 2021-04-27 Universal enhancement frame for foggy images under complex illumination conditions Active CN113112429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110457241.9A CN113112429B (en) 2021-04-27 2021-04-27 Universal enhancement frame for foggy images under complex illumination conditions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110457241.9A CN113112429B (en) 2021-04-27 2021-04-27 Universal enhancement frame for foggy images under complex illumination conditions

Publications (2)

Publication Number Publication Date
CN113112429A true CN113112429A (en) 2021-07-13
CN113112429B CN113112429B (en) 2024-04-16

Family

ID=76720145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110457241.9A Active CN113112429B (en) 2021-04-27 2021-04-27 Universal enhancement frame for foggy images under complex illumination conditions

Country Status (1)

Country Link
CN (1) CN113112429B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5825936A (en) * 1994-09-22 1998-10-20 University Of South Florida Image analyzing device using adaptive criteria
KR101448164B1 (en) * 2013-04-22 2014-10-14 금오공과대학교 산학협력단 Method for Image Haze Removal Using Parameter Optimization
WO2014193080A1 (en) * 2013-05-28 2014-12-04 삼성테크윈 주식회사 Method and device for removing haze in single image
WO2015103739A1 (en) * 2014-01-08 2015-07-16 富士通株式会社 Apparatus, electronic device and method for enhancing image contrast
US20160027160A1 (en) * 2014-07-28 2016-01-28 Disney Enterprises, Inc. Temporally coherent local tone mapping of high dynamic range video
WO2016159884A1 (en) * 2015-03-30 2016-10-06 Agency For Science, Technology And Research Method and device for image haze removal
CN106960421A (en) * 2017-03-16 2017-07-18 天津大学 Evening images defogging method based on statistical property and illumination estimate
CN108431886A (en) * 2015-12-21 2018-08-21 皇家飞利浦有限公司 Optimize high dynamic range images for particular display
CN110852956A (en) * 2019-07-22 2020-02-28 江苏宇特光电科技股份有限公司 Method for enhancing high dynamic range image
CN112634384A (en) * 2020-12-23 2021-04-09 上海富瀚微电子股份有限公司 Method and device for compressing high dynamic range image

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5825936A (en) * 1994-09-22 1998-10-20 University Of South Florida Image analyzing device using adaptive criteria
KR101448164B1 (en) * 2013-04-22 2014-10-14 금오공과대학교 산학협력단 Method for Image Haze Removal Using Parameter Optimization
WO2014193080A1 (en) * 2013-05-28 2014-12-04 삼성테크윈 주식회사 Method and device for removing haze in single image
WO2015103739A1 (en) * 2014-01-08 2015-07-16 富士通株式会社 Apparatus, electronic device and method for enhancing image contrast
US20160027160A1 (en) * 2014-07-28 2016-01-28 Disney Enterprises, Inc. Temporally coherent local tone mapping of high dynamic range video
WO2016159884A1 (en) * 2015-03-30 2016-10-06 Agency For Science, Technology And Research Method and device for image haze removal
CN108431886A (en) * 2015-12-21 2018-08-21 皇家飞利浦有限公司 Optimize high dynamic range images for particular display
CN106960421A (en) * 2017-03-16 2017-07-18 天津大学 Evening images defogging method based on statistical property and illumination estimate
CN110852956A (en) * 2019-07-22 2020-02-28 江苏宇特光电科技股份有限公司 Method for enhancing high dynamic range image
CN112634384A (en) * 2020-12-23 2021-04-09 上海富瀚微电子股份有限公司 Method and device for compressing high dynamic range image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
EUNSUNG LEE ETAL.: "Contrast Enhancement Using Dominant Brightness Level Analysis and Adaptive Intensity Transformation for Remote Sensing Images", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》, vol. 10, no. 1, 31 December 2012 (2012-12-31), pages 62 - 66, XP055566774, DOI: 10.1109/LGRS.2012.2192412 *
付青青;景春雷;裴彦良;阚光明;张正炳;吴爱平;: "基于非锐化掩模引导滤波的水下图像细节增强算法研究", 海洋学报, no. 07, 15 July 2020 (2020-07-15), pages 134 - 142 *
杨爱萍;赵美琪;宋曹春洋;王金斌;: "基于色调映射和暗通道融合的弱光图像增强", 天津大学学报(自然科学与工程技术版), no. 07, 3 July 2018 (2018-07-03), pages 106 - 114 *
田子建 等: "基于双域分解的矿井下图像增强算法", 《光子学报》, vol. 48, no. 5, 31 May 2019 (2019-05-31), pages 1 - 13 *

Also Published As

Publication number Publication date
CN113112429B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
Zhang et al. Enhancing underwater image via color correction and bi-interval contrast enhancement
Bai et al. Underwater image enhancement based on global and local equalization of histogram and dual-image multi-scale fusion
Gao et al. Sand-dust image restoration based on reversing the blue channel prior
CN110782407B (en) Single image defogging method based on sky region probability segmentation
CN111080686B (en) Method for highlight removal of image in natural scene
Kumar et al. An improved Gamma correction model for image dehazing in a multi-exposure fusion framework
CN111861896A (en) UUV-oriented underwater image color compensation and recovery method
CN111968065A (en) Self-adaptive enhancement method for image with uneven brightness
Priyanka et al. Low-light image enhancement by principal component analysis
CN113284061A (en) Underwater image enhancement method based on gradient network
Zhu et al. Underwater image enhancement based on colour correction and fusion
CN114972102A (en) Underwater image enhancement method based on global variable contrast enhancement and local correction
CN116823686B (en) Night infrared and visible light image fusion method based on image enhancement
CN115619662A (en) Image defogging method based on dark channel prior
CN113112429B (en) Universal enhancement frame for foggy images under complex illumination conditions
CN113256533B (en) Self-adaptive low-illumination image enhancement method and system based on MSRCR
CN115908155A (en) NSST domain combined GAN and scale correlation coefficient low-illumination image enhancement and denoising method
CN115034985A (en) Underwater image enhancement method
CN114549342A (en) Underwater image restoration method
CN107317968A (en) Image defogging method, device, computer can storage medium and mobile terminals
Mi et al. A generalized enhancement framework for hazy images with complex illumination
CN106952243A (en) UUV Layer Near The Sea Surface infrared image self adaptation merger histogram stretches Enhancement Method
CN112529802B (en) Atmospheric scattering degraded image recovery method based on scattering coefficient ratio estimation
Deng Mathematical approaches to digital color image denoising
Radhika et al. IMAGE CONTRAST ENCHANCEMENT USING BRIGHTNESS PRESERVATION DYNAMIC HISTOGRAM EQUALIZATION

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant