CN111489302A - Maritime image enhancement method in fog environment - Google Patents
- Publication number
- CN111489302A (application CN202010231300.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- network
- noise
- transmittance
- marine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/70
- G06N3/045 — Neural networks; architecture: combinations of networks
- G06N3/08 — Neural networks: learning methods
- G06T5/20 — Image enhancement or restoration by the use of local operators
- G06T2207/10024 — Color image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
The invention discloses a maritime image enhancement method for fog environments. The method first estimates an initial transmittance map and an atmospheric light value with the dark channel prior algorithm, then corrects the transmittance map with a sky-region soft segmentation method, optimizes the corrected map with a convolutional neural network, and finally recovers a fog-free image by inverting the atmospheric light scattering model. Because the sky region usually occupies a large proportion of a maritime fog image and differs in texture structure from ordinary images, sky regions restored by the traditional dark channel prior algorithm show obvious distortion; the invention provides an enhancement method that effectively solves this problem for maritime fog images.
Description
Technical Field
The invention relates to image enhancement technology, and in particular to a maritime image enhancement method for fog environments.
Background
The propagation of fog depends on the scene depth at each location. Many image enhancement methods have been applied to remove haze from a single image, and they can be broadly divided into two categories: enhancement-based methods and physics-based methods. Enhancement-based defogging methods, such as histogram-based, contrast-based, and saturation-based techniques, do not take the intrinsic physics of fog formation into account: they only raise image contrast with classical enhancement operations, so they cannot improve image quality in the presence of non-uniform haze. In contrast, physics-based methods focus primarily on the degradation mechanism of the image. Both theoretical and experimental studies have shown that physics-based defogging methods generally produce higher image quality under different imaging conditions.
Based on empirical statistics over haze-free images, the Dark Channel Prior (DCP) was discovered: in most haze-free patches, at least one color channel has some pixels with very low intensity. Under this prior, haze can be estimated and removed using an atmospheric scattering model. However, DCP removes haze poorly in sky regions and is computationally intensive, and improved algorithms have been proposed to overcome these limitations: Nishino et al. model the image with a factorial MRF to estimate scene radiance more accurately; Meng et al. propose an effective regularized dehazing method that recovers the haze-free image through inherent boundary constraints; machine learning frameworks have been used to learn haze-relevant features; and deep convolutional neural networks have been trained end-to-end for haze removal.
At present, although many researchers have proposed algorithms for fog image enhancement, the sky region occupies a large proportion of maritime images, and maritime images differ from ordinary images in texture structure. Defogging methods designed for ordinary images easily cause color distortion in the sky region of maritime images, so a maritime image enhancement method for fog environments has important practical significance.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a maritime image enhancement method for fog environments that addresses the above defects in the prior art.
The technical scheme adopted by the invention to solve this problem is as follows: a maritime image enhancement method in a fog environment, comprising the following steps:
1) obtaining a maritime fog image to be processed, and estimating an initial transmittance map and an atmospheric light value through the dark channel prior algorithm;
2) correcting the initial transmittance graph by adopting a sky region soft segmentation method to obtain a corrected transmittance graph;
3) optimizing the modified transmittance map by a convolutional neural network;
4) recovering the defogged, enhanced maritime image from the optimized transmittance map through the inverted atmospheric light scattering model.
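The four steps above can be sketched in NumPy. This is only a dark-channel-prior baseline: the sky correction of step 2 and the CNN refinement of step 3 are omitted, the window size and the atmospheric-light selection rule (mean of the brightest 0.1% dark-channel pixels) are simplifying assumptions, and all function names are illustrative.

```python
import numpy as np

def dark_channel(img, win=7):
    """Per-pixel minimum over RGB, then a win x win spatial minimum (edge-padded)."""
    m = img.min(axis=2)
    p = win // 2
    mp = np.pad(m, p, mode='edge')
    h, w = m.shape
    out = np.full_like(m, np.inf)
    for dy in range(win):                 # sliding-window minimum filter
        for dx in range(win):
            out = np.minimum(out, mp[dy:dy + h, dx:dx + w])
    return out

def enhance_marine_image(I, w=0.95, t_lb=0.1):
    """Steps 1 and 4 of the method; steps 2-3 (sky soft segmentation and
    CNN refinement) are omitted, so this is only a DCP baseline sketch."""
    dc = dark_channel(I)
    # Atmospheric light A: mean of the brightest 0.1% dark-channel pixels
    # (a simplification of the usual selection rule).
    idx = np.argsort(dc.ravel())[-max(1, dc.size // 1000):]
    A = I.reshape(-1, 3)[idx].mean(axis=0)
    # Step 1: initial transmittance from the dark channel of I / A
    t = 1.0 - w * dark_channel(I / np.maximum(A, 1e-6))
    # Step 4: inverted atmospheric light scattering model with lower bound t_lb
    J = (I - A) / np.maximum(t, t_lb)[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

On a float image in [0, 1], the output stays in the displayable range thanks to the transmittance lower bound and the final clip.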
According to the scheme, the initial transmittance map and the atmospheric light value in step 1) are estimated by the dark channel prior algorithm as follows.

According to the dark channel prior theory:

J^dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y) → 0

Combining this with the atmospheric light scattering model gives:

t~(x) = 1 − w · min_{y∈Ω(x)} min_{c∈{r,g,b}} ( I^c(y) / A^c )

where t~ is the initial transmittance map, w represents the degree of haze removal, and A represents the atmospheric light value; the larger w is, the more pronounced the haze removal effect, and w is set to 0.95 empirically.
According to the scheme, the sky region soft segmentation method is expressed as:

t'(x) = M(x) · t1(x) + (1 − M(x)) · t~(x)

where t~ is the initial transmittance map and t' is the corrected transmittance map; the weighting function M and the luminance-based transmittance map t1 are expressed in terms of the image luminance and the scattering coefficient, where L denotes the luminance of the input image I (the RGB image is converted to the HSI color space and the I channel is taken as L), L* denotes the 95th-percentile value of L, and β is the scattering coefficient. Since β is wavelength-dependent from an optical point of view, the coefficients β of the R, G, and B channels of a color image are set to 0.3324, 0.3433, and 0.3502, respectively.
According to the scheme, the convolutional neural network in the step 3) is a blind denoising convolutional neural network.
According to the scheme, the convolutional neural network in the step 3) comprises a noise estimation network and a noise removal network, and the structure is as follows:
a. Noise estimation sub-network (CNN_E)
The noise estimation sub-network estimates the noise level of the image from the noisy input g and outputs a noise level map; it uses a 5-layer convolutional (Conv) network;
b. Noise removal sub-network (CNN_D)
The first layer of the noise removal sub-network uses Conv + ReLU, the middle layers use Conv + BN (batch normalization) + ReLU, and the last layer uses Conv only. All filters in the denoising sub-network are of size 3 × 3, and every layer uses zero padding so that the input and output sizes of each layer stay the same, which prevents boundary artifacts. The sub-network learns a residual mapping by residual learning and finally outputs a noise-free estimated image.
According to the scheme, each convolutional layer of the noise estimation sub-network in step 3) has 32 channels and filters of size 3 × 3, and a ReLU activation function follows each convolutional layer.
According to the scheme, the loss function adopted by the convolutional neural network in step 3) is a hybrid loss comprising three sub-losses that constrain the noise level estimate σ̂ and the noise-free estimated image f̂. To estimate the noise level reliably, an asymmetric MSE loss L_asymm and a total variation regularization loss L_TV constrain the noise level map σ and its estimate σ̂:

L_asymm = Σ_{x∈Ω} |α − β(x)| · (σ̂(x) − σ(x))²,    L_TV = ||∇_h σ̂||₂² + ||∇_v σ̂||₂²

where Ω denotes the image domain, ∇_h and ∇_v are the horizontal and vertical gradient operators, and α and β are penalty coefficients of the loss function; α is set empirically to 0.3, and β(x) = 1 when σ̂(x) − σ(x) < 0 and β(x) = 0 otherwise, so under-estimation of the noise level is penalized more heavily. For the noise-free real image f and the estimated image f̂, a reconstruction loss is used:

L_rec = ||f̂ − f||₂²

Thus the loss function of the entire network can be expressed as:

L = L_rec + L_asymm + L_TV
according to the scheme, in the first step, 10000 image data sets are collected and manufactured, a data set of a transmittance graph is obtained through image depth information, and the transmittance graph is cut into 10000 × 512 large data sets;
according to the scheme, the mathematical formula of the atmospheric light scattering model inverted in the step 4) is as follows:
wherein max (a, b) represents the selection of the larger of a, b, tlbRepresents the lower bound of transmittance and empirically sets tlb=0.1。
The invention has the following beneficial effects:
the invention combines the traditional algorithm with deep learning through the transmissivity map, is more suitable for enhancing the marine fog image, provides a new convolution neural network and can obtain satisfactory optimization effect.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
fig. 2 is a schematic diagram of a neural network structure according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, a method for enhancing a marine image in a fog environment includes the following steps:
Firstly, a maritime fog image to be processed is input, and an initial transmittance map and an atmospheric light value are estimated through the dark channel prior algorithm;
The dark channel prior algorithm estimates the initial transmittance map and the atmospheric light value as follows:
a. Dark channel prior
In local regions other than the sky, some pixels have at least one color channel with a very low value. Thus, for an arbitrary input image J, its dark channel can be expressed as:

J^dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y)

where r, g, b denote the three channels of the color image and Ω(x) denotes a square window centered on pixel x. The dark channel prior theory states that:

J^dark → 0
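The dark channel defined above can be computed with a minimal NumPy sketch; the 7 × 7 window size here is an illustrative assumption (the patent does not state the window size in this extract).

```python
import numpy as np

def dark_channel(img, win=7):
    """Min over the three color channels, then a win x win spatial minimum.

    img: float array of shape (H, W, 3); edge padding keeps the output H x W.
    """
    m = img.min(axis=2)                      # per-pixel channel minimum
    p = win // 2
    mp = np.pad(m, p, mode='edge')
    h, w = m.shape
    out = np.full_like(m, np.inf)
    for dy in range(win):                    # sliding-window minimum
        for dx in range(win):
            out = np.minimum(out, mp[dy:dy + h, dx:dx + w])
    return out
```

On a saturated single-color patch (e.g. pure red, so G and B are zero everywhere), the dark channel is exactly zero, matching the prior J^dark → 0.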
b. atmospheric scattering model
According to the physical characteristics of atmospheric light propagation in the foggy-day degradation process, the atmospheric scattering model can be expressed as:
I(x)=J(x)t(x)+A(1-t(x))
where I denotes the fog image, J the defogged image, t the transmittance map, and A the atmospheric light value.
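The forward model I(x) = J(x)t(x) + A(1 − t(x)) can be exercised directly on synthetic data; the scene, atmospheric light, and transmittance values below are illustrative only. As t → 0 the observation approaches the atmospheric light A, and as t → 1 it approaches the scene J.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.random((8, 8, 3))            # haze-free scene (illustrative data)
A = np.array([0.9, 0.9, 0.9])        # atmospheric light (illustrative value)
t = np.full((8, 8), 0.6)             # constant transmittance for the demo

# Fog image per the atmospheric scattering model
I = J * t[..., None] + A * (1.0 - t[..., None])
```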
c. Initial transmittance map:
Firstly, according to the dark channel prior theory:

min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y) → 0

Substituting this into the atmospheric light scattering model yields:

t~(x) = 1 − w · min_{y∈Ω(x)} min_{c∈{r,g,b}} ( I^c(y) / A^c )

where t~ is the initial transmittance map and w represents the degree of haze removal; the larger w is, the more significant the haze removal effect, and w is set to 0.95 empirically.
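The relation t~ = 1 − w · min_c(I^c/A^c) can be sketched as follows; for brevity the spatial minimum window of the full dark channel is omitted (per-pixel channel minimum only), which is a simplification of the formula above.

```python
import numpy as np

def initial_transmittance(I, A, w=0.95):
    """t~ = 1 - w * channel-min of I / A; the spatial minimum window is
    omitted here for brevity (per-pixel channel minimum only)."""
    norm = I / np.maximum(A, 1e-6)    # normalize each channel by atmospheric light
    return 1.0 - w * norm.min(axis=2)
```

A pixel as bright as the atmospheric light (a haze-opaque region) gets t~ = 1 − w, i.e. 0.05 with the empirical w = 0.95.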
Secondly, the transmittance map is corrected by the sky region soft segmentation method;
the sky region soft segmentation method can be expressed as:
in the formula (I), the compound is shown in the specification,is an initial graph of the transmittance of the light,for the modified transmittance map, a weighting function M and a luminance-based transmittance map t1Can be expressed as:
in the formula (I), the compound is shown in the specification,l represents an input imageLuminance of I (convert RGB image to HSI color space and select I channel as L), L*A value representing L95% of luminance, β is a scattering coefficient, β is wavelength-dependent from an optical point of view, and thus correlation coefficients β of RGB channels in a color image are set to 0.3324, 0.3433, and 0.3502, respectively.
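The exact formulas for M and t1 are in the patent's figures, which are not reproduced in this extract; the sketch below is therefore a hypothetical stand-in that only illustrates the soft-blend structure. The sigmoid mask M (with steepness k) and the exponential luminance attenuation t1 = exp(−βL) are assumptions, not the patent's definitions.

```python
import numpy as np

def soft_sky_blend(t0, L, L_star, beta=0.3433, k=20.0):
    """Hypothetical soft blend t' = M*t1 + (1 - M)*t0.

    t0: initial transmittance map; L: luminance (I channel of HSI);
    L_star: 95th-percentile luminance. The forms of M, t1 and the value
    of k are illustrative assumptions, not the patent's exact formulas.
    """
    M = 1.0 / (1.0 + np.exp(-k * (L - L_star)))   # soft mask, ~1 in bright sky
    t1 = np.exp(-beta * L)                        # luminance-driven transmittance
    return M * t1 + (1.0 - M) * t0
```

Away from the bright sky (L well below L*), M ≈ 0 and the initial transmittance passes through unchanged, which is the behavior a soft segmentation should have.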
Thirdly, the transmittance map is optimized through a convolutional neural network;
theoretically, the transmittance map should smooth out the details as much as possible while ensuring that the texture important in the image is sharp. Therefore, the invention adopts a blind denoising convolutional neural network to eliminate artifacts, noise and smooth details in the transmittance map.
a. Noise estimation sub-network (CNN_E)
The noise estimation sub-network estimates the noise level of the image from the noisy input g and outputs a noise level map. It uses a 5-layer Conv network with 32 channels per convolutional layer and filters of size 3 × 3; a ReLU activation function follows each convolutional layer.
b. Noise removal sub-network (CNN_D)
The first layer of the noise removal sub-network uses Conv + ReLU, the middle layers use Conv + BN (batch normalization) + ReLU, and the last layer uses Conv only. All filters in the denoising sub-network are of size 3 × 3, and every layer uses zero padding so that the input and output sizes of each layer stay the same, which prevents boundary artifacts. The sub-network learns a residual mapping by residual learning and finally outputs a noise-free estimated image.
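The size-preserving property of 3 × 3 convolution with zero padding, which the sub-network relies on, can be checked with a minimal single-channel NumPy convolution (one layer only, not the patent's full network):

```python
import numpy as np

def conv3x3_same(x, k):
    """Single-channel 3 x 3 correlation with zero padding: output size equals
    input size, so no boundary shrinkage occurs across stacked layers."""
    h, w = x.shape
    xp = np.pad(x, 1)                       # zero padding, as in the sub-network
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * xp[dy:dy + h, dx:dx + w]
    return out
```

Because every layer preserves the spatial size, arbitrarily deep stacks of such layers keep the transmittance map at its original resolution.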
c. Loss function
The invention adopts a hybrid loss function to measure image similarity during training; through this loss function, the noise can be effectively estimated and removed.
The hybrid loss comprises three sub-losses that constrain the noise level estimate σ̂ and the noise-free estimated image f̂. To estimate the noise level reliably, an asymmetric MSE loss L_asymm and a total variation regularization loss L_TV constrain the noise level map σ and its estimate σ̂:

L_asymm = Σ_{x∈Ω} |α − β(x)| · (σ̂(x) − σ(x))²,    L_TV = ||∇_h σ̂||₂² + ||∇_v σ̂||₂²

where Ω denotes the image domain, ∇_h and ∇_v are the horizontal and vertical gradient operators, and α and β are penalty coefficients of the loss function; α is set empirically to 0.3, and β(x) = 1 when σ̂(x) − σ(x) < 0 and β(x) = 0 otherwise, so under-estimation of the noise level is penalized more heavily. For the noise-free real image f and the estimated image f̂, a reconstruction loss is used:

L_rec = ||f̂ − f||₂²

Thus the loss function of the entire network can be expressed as:

L = L_rec + L_asymm + L_TV
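The three loss terms can be written directly in NumPy, following the definitions above (asymmetric MSE on the noise level, total variation on the noise level estimate, and a reconstruction term); the equal weighting of the terms in the total is an assumption.

```python
import numpy as np

def hybrid_loss(sigma_hat, sigma, f_hat, f, alpha=0.3):
    """L = L_rec + L_asymm + L_TV (equal weighting is an assumption)."""
    d = sigma_hat - sigma
    beta = (d < 0).astype(float)                     # 1 where noise level is under-estimated
    l_asymm = np.sum(np.abs(alpha - beta) * d ** 2)  # |0.3 - 1| = 0.7 penalty when under-estimating
    l_tv = np.sum(np.diff(sigma_hat, axis=1) ** 2) \
         + np.sum(np.diff(sigma_hat, axis=0) ** 2)   # horizontal + vertical gradients
    l_rec = np.sum((f_hat - f) ** 2)                 # reconstruction term
    return l_rec + l_asymm + l_tv
```

With α = 0.3, under-estimated pixels carry weight 0.7 and over-estimated pixels weight 0.3, so the network is pushed to avoid under-estimating the noise level.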
the training method of the neural network in this embodiment is as follows:
1) 10000 images are collected to build the data set; a data set of transmittance maps is obtained from the image depth information, and the transmittance maps are cropped into 10000 patches of size 512 × 512;
the collected marine fog image comprises a plurality of common marine objects, such as sea, reef, bridge column, ship, bridge and the like;
2) the convolutional neural network training framework is used to train on the original transmittance maps and the noisy transmittance maps respectively, obtaining the trained parameters and testing the denoising effect.
Fourthly, the defogged image is recovered through the inverted atmospheric light scattering model.
The mathematical formula of the inverted atmospheric light scattering model is:

J(x) = (I(x) − A) / max(t(x), t_lb) + A

where max(a, b) selects the larger of a and b, t_lb represents the lower bound of the transmittance and is empirically set to t_lb = 0.1, I denotes the fog image, J the defogged image, and A the atmospheric light value.
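The inverted model above can be applied in a few lines; when the fog image was produced by the forward model and t stays above the lower bound, the scene is recovered exactly.

```python
import numpy as np

def recover(I, A, t, t_lb=0.1):
    """J = (I - A) / max(t, t_lb) + A, clipped to the displayable range [0, 1]."""
    J = (I - A) / np.maximum(t, t_lb)[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

The lower bound t_lb prevents division by near-zero transmittance in dense haze, at the cost of leaving some residual haze there.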
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.
Claims (9)
1. A maritime image enhancement method in a fog environment, characterized by comprising the following steps:
1) obtaining a maritime fog image to be processed, and estimating an initial transmittance map and an atmospheric light value through the dark channel prior algorithm;
2) correcting the initial transmittance graph by adopting a sky region soft segmentation method to obtain a corrected transmittance graph;
3) optimizing the modified transmittance map by a convolutional neural network;
4) recovering the defogged, enhanced maritime image from the optimized transmittance map through the inverted atmospheric light scattering model.
2. The maritime image enhancement method in a fog environment according to claim 1, characterized in that the initial transmittance map and the atmospheric light value in step 1) are estimated by the dark channel prior algorithm as follows:
according to the dark channel prior theory:
J^dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y) → 0
combining this with the atmospheric light scattering model gives:
t~(x) = 1 − w · min_{y∈Ω(x)} min_{c∈{r,g,b}} ( I^c(y) / A^c ).
3. The maritime image enhancement method in a fog environment according to claim 1, characterized in that the sky region soft segmentation method is expressed as:
t'(x) = M(x) · t1(x) + (1 − M(x)) · t~(x)
where t~ is the initial transmittance map and t' is the corrected transmittance map, and the weighting function M and the luminance-based transmittance map t1 are expressed in terms of the image luminance and the scattering coefficient.
4. The maritime image enhancement method in a fog environment according to claim 1, characterized in that the convolutional neural network in step 3) is a blind-denoising convolutional neural network.
5. The maritime image enhancement method in a fog environment according to claim 1, characterized in that the convolutional neural network in step 3) comprises a noise estimation sub-network and a noise removal sub-network, structured as follows:
a. noise estimation sub-network
the noise estimation sub-network estimates the noise level of the image from the noisy input g and outputs a noise level map; it uses a 5-layer convolutional (Conv) network;
b. noise removal sub-network
the first layer of the noise removal sub-network uses Conv + ReLU, the middle layers use Conv + BN + ReLU, and the last layer uses Conv only; all filters in the noise removal sub-network are of size 3 × 3, and every layer uses zero padding so that the input and output sizes of each layer stay the same; after the second layer, each layer adds a batch normalization operation between Conv and ReLU, and the noise removal sub-network learns a residual mapping by residual learning, finally yielding a noise-free estimated image.
6. The maritime image enhancement method in a fog environment according to claim 5, characterized in that each convolutional layer of the noise estimation sub-network in step 3) has 32 channels and filters of size 3 × 3, and a ReLU activation function follows each convolutional layer.
7. The maritime image enhancement method in a fog environment according to claim 5, characterized in that the loss function adopted by the convolutional neural network in step 3) is a hybrid loss comprising three sub-losses that constrain the noise level estimate σ̂ and the noise-free estimated image f̂; to estimate the noise level reliably, an asymmetric MSE loss L_asymm and a total variation regularization loss L_TV constrain the noise level map σ and its estimate σ̂:
L_asymm = Σ_{x∈Ω} |α − β(x)| · (σ̂(x) − σ(x))²,    L_TV = ||∇_h σ̂||₂² + ||∇_v σ̂||₂²
where Ω denotes the image domain, ∇_h and ∇_v are the horizontal and vertical gradient operators, and α and β are penalty coefficients of the loss function;
for the noise-free real image f and the estimated image f̂, a reconstruction loss is used:
L_rec = ||f̂ − f||₂²
thus, the loss function of the entire network can be expressed as:
L = L_rec + L_asymm + L_TV.
8. The maritime image enhancement method in a fog environment according to claim 5, characterized in that the training method of the convolutional neural network is as follows:
collecting 10000 images to build the data set, obtaining a data set of transmittance maps from the image depth information, and cropping the transmittance maps into 10000 patches of size 512 × 512;
and training on the original transmittance maps and the noisy transmittance maps respectively with the convolutional neural network training framework, obtaining the training parameters and testing the denoising effect.
9. The maritime image enhancement method in a fog environment according to claim 1, characterized in that the mathematical formula of the inverted atmospheric light scattering model in step 4) is:
J(x) = (I(x) − A) / max(t(x), t_lb) + A
where max(a, b) selects the larger of a and b, and t_lb represents the lower bound of the transmittance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010231300.6A CN111489302B (en) | 2020-03-27 | 2020-03-27 | Maritime image enhancement method in fog environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111489302A true CN111489302A (en) | 2020-08-04 |
CN111489302B CN111489302B (en) | 2023-12-05 |
Family
ID=71810818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010231300.6A Active CN111489302B (en) | 2020-03-27 | 2020-03-27 | Maritime image enhancement method in fog environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111489302B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103150708A (en) * | 2013-01-18 | 2013-06-12 | 上海交通大学 | Image quick defogging optimized method based on black channel |
CN104299192A (en) * | 2014-09-28 | 2015-01-21 | 北京联合大学 | Single image defogging method based on atmosphere light scattering physical model |
US9288458B1 (en) * | 2015-01-31 | 2016-03-15 | Hrl Laboratories, Llc | Fast digital image de-hazing methods for real-time video processing |
CN105761230A (en) * | 2016-03-16 | 2016-07-13 | 西安电子科技大学 | Single image defogging method based on sky region segmentation processing |
CN106530246A (en) * | 2016-10-28 | 2017-03-22 | 大连理工大学 | Image dehazing method and system based on dark channel and non-local prior |
KR20180050832A (en) * | 2016-11-07 | 2018-05-16 | 한국과학기술원 | Method and system for dehazing image using convolutional neural network |
CN110211067A (en) * | 2019-05-27 | 2019-09-06 | 哈尔滨工程大学 | One kind being used for UUV Layer Near The Sea Surface visible images defogging method |
US20190287219A1 (en) * | 2018-03-15 | 2019-09-19 | National Chiao Tung University | Video dehazing device and method |
CN110782407A (en) * | 2019-10-15 | 2020-02-11 | 北京理工大学 | Single image defogging method based on sky region probability segmentation |
AU2020100274A4 (en) * | 2020-02-25 | 2020-03-26 | Huang, Shuying DR | A Multi-Scale Feature Fusion Network based on GANs for Haze Removal |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112927157A (en) * | 2021-03-08 | 2021-06-08 | 电子科技大学 | Improved dark channel defogging method using weighted least square filtering |
CN112927157B (en) * | 2021-03-08 | 2023-08-15 | 电子科技大学 | Improved dark channel defogging method adopting weighted least square filtering |
Also Published As
Publication number | Publication date |
---|---|
CN111489302B (en) | 2023-12-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||