CN112767286A - Dark light image self-adaptive enhancement method based on intensive deep learning - Google Patents
- Publication number
- CN112767286A CN112767286A CN202110251337.XA CN202110251337A CN112767286A CN 112767286 A CN112767286 A CN 112767286A CN 202110251337 A CN202110251337 A CN 202110251337A CN 112767286 A CN112767286 A CN 112767286A
- Authority
- CN
- China
- Prior art keywords
- illumination
- map
- network
- image
- reflectivity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a dark-light image adaptive enhancement method based on dense deep learning, belonging to the field of computer vision. The proposed dense deep convolutional neural network model comprises four sub-network models and achieves locally adaptive illumination enhancement, noise removal, distortion repair, and semantic restoration for dark-light images. First, a decomposition network decouples the dark-light image into a reflectance map and an illuminance map. Next, guided by the illumination information obtained from the MAMI module and following the residual idea, the model learns the spatially small noise component of the decomposed reflectance map and removes it. Then a two-block dense convolutional neural network, trained with a loss function containing a semantic-recovery term, restores the semantic information of the reflectance map and repairs color distortion and related artifacts. The ratio between the illuminance map to be enhanced and the target illuminance map, together with the gradient of the illuminance map to be enhanced, is fed into an illumination adjustment network to achieve flexible global brightness control and locally adaptive brightness enhancement. Finally, the enhanced illuminance map and the recovered reflectance map are synthesized to obtain the target image. The method removes noise and repairs distortion and semantics while adaptively adjusting image brightness, so that the enhanced images score well on image-aesthetic metrics and outperform other methods on many computer vision tasks.
Description
Technical Field
The invention relates to a dark-light image adaptive enhancement method based on dense deep learning, which performs semantic restoration and adaptive enhancement on images captured in dark-light environments, achieving illumination enhancement, noise removal, distortion repair, and semantic restoration for dark-light images, and belongs to the field of computer vision.
Background
In recent years, the internet, intelligent terminals, and related software and hardware have developed rapidly, and images have become a vast and important data resource. Image information plays a major role in people's work and daily life: as an important information carrier, it promotes the exchange of information and helps people perceive the world more intuitively. In real life, however, limitations of natural and technical conditions leave a large proportion of captured images poorly lit. Such dark-light images suffer from low brightness, poor visibility, noise, and distortion, so they fail to meet people's expectations of image quality and severely hinder downstream computer vision algorithms such as object detection and instance segmentation.
To address these problems, many dark-light image enhancement techniques have been proposed, including filtering-based, decomposition-based, and deep-learning-based methods. While these methods improve image quality as perceived by humans, they ignore "machine" perception: they enhance visibility but introduce unnatural effects such as local over-enhancement and color distortion, and, most critically, they neglect the restoration of image semantics, which is precisely what computer vision algorithms rely on to understand an image.
It is therefore important to solve adaptive enhancement and semantic restoration of dark-light images together. Deep learning has risen rapidly with the continued development of hardware: ever-deeper convolutional neural networks realize complex mappings, and back-propagation of gradients continuously optimizes the network model until an accurate, complex mapping is obtained. Deep learning fully exploits large amounts of data and hardware computing power, can be iterated and updated continuously, and requires no precise prior model, making it possible to adaptively enhance dark-light images, eliminate local over-enhancement, and restore image semantics.
Therefore, for images captured in dark-light environments to be used widely and effectively in computer vision tasks, research on deep-learning-based adaptive enhancement of dark-light images is both important and meaningful.
Disclosure of Invention
The invention aims to design a dark-light image adaptive enhancement method based on dense deep learning that removes unnatural effects such as local over-enhancement and distortion and restores the deep semantics of the image, so that the enhanced dark-light image improves on human perceptual metrics and can readily be used by computer vision algorithms.
The technical scheme of the invention is as follows. First, training, validation, and test data sets are prepared. Then a dense deep convolutional neural network model is designed, comprising a decomposition network, a noise-reduction network, a recovery network, and a gradient-adaptive illumination adjustment network, together with the loss functions used to train the model. The decomposition network is trained to produce reflectance and illuminance maps; the noise-reduction network is trained to produce a denoised reflectance map; the recovery network is trained to produce a reflectance map with restored semantics and repaired distortion; and the gradient-adaptive illumination adjustment network is trained to produce an illuminance map whose global and local illumination can be flexibly adjusted. Finally, the adjusted illuminance map and the recovered reflectance map are synthesized. The specific process is as follows:
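The four trained sub-networks compose into a single enhancement pipeline. A minimal sketch of that composition follows; `decompose`, `denoise`, `restore`, and `adjust` are hypothetical stand-ins for the four sub-networks (the patent implements them as convolutional networks under TensorFlow), and the nested-list image representation is illustrative only:

```python
def enhance(dark_image, decompose, denoise, restore, adjust, target_ratio):
    """Compose the four sub-networks of the scheme (a sketch, not the patent code).

    decompose : image -> (reflectance map, illuminance map)
    denoise   : (reflectance, illuminance) -> denoised reflectance
    restore   : reflectance -> semantically restored reflectance
    adjust    : (illuminance, target_ratio) -> enhanced illuminance
    """
    reflectance, illuminance = decompose(dark_image)
    reflectance = denoise(reflectance, illuminance)   # illumination-guided denoising
    reflectance = restore(reflectance)                # semantic/distortion repair
    illuminance = adjust(illuminance, target_ratio)   # global + local brightness
    # Retinex synthesis: element-wise product of the two maps
    return [[r * l for r, l in zip(rr, ll)]
            for rr, ll in zip(reflectance, illuminance)]
```

With identity stand-ins for the learned networks, the pipeline reduces to scaling the illuminance by the target ratio, which is the behavior the sketch is meant to expose.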
In the data set preparation stage, the data set for training the network model consists of image pairs of a normal-illumination image and a low-illumination image: the normally lit image serves as the training label and the low-illumination image as the network input. To save time and cost while preserving data set quality, the low-illumination images of some pairs are captured after lowering the camera's exposure.
In the dense deep convolutional neural network model design stage, the model is composed of 4 sub-networks: a decomposition network, a noise-reduction network, a recovery network, and a gradient-adaptive illumination adjustment network. The decomposition network, inspired by Retinex theory, uses a dense convolutional neural network to decouple the original image into a reflectance map and an illuminance map while making full use of contextual features. The noise-reduction network consists of a dense convolutional neural network, residual connections, and a multi-depth attention module (MAMI); guided by the illuminance map, it treats the noise as a residual quantity and removes it from the reflectance map. The recovery network is composed of two residual dense convolutional blocks and fuses the shallow semantic features of the decomposition network to restore the image semantics and remove distortion. The gradient-adaptive illumination adjustment network takes the gradient of the illuminance map and the ratio between the label illuminance map and the input illuminance map as the keys to adjusting the low-illumination image; this adjustment sub-network consists of 4 plain convolutional layers. The structure of each sub-network and of the overall model is shown in the figures.
In the loss function design stage, a loss function is designed for each of the decomposition, noise-reduction, recovery, and gradient-adaptive illumination adjustment networks: the decomposition loss contains similarity, smoothness, and reconstruction terms; the noise-reduction loss is an MSE term; the recovery loss contains structural-consistency, semantic-recovery, and similarity terms; and the adjustment loss contains a gradient-consistency term.
In the training stage, the model is trained under the TensorFlow framework on an Nvidia GTX 2080Ti GPU with an Intel Core i7-9700K CPU, using the Adam training strategy.
Drawings
FIG. 1 is a structural diagram of the decomposition sub-network of the model proposed by the method;
FIG. 2 is a structural diagram of the noise-reduction sub-network of the proposed model;
FIG. 3 is a diagram of the attention module of the proposed model;
FIG. 4 is a structural diagram of the recovery sub-network of the proposed model;
FIG. 5 is a structural diagram of the gradient-adaptive illumination adjustment sub-network of the proposed model;
FIG. 6 is an overall structural diagram of the proposed model.
Detailed Description
The invention designs a dark-light image adaptive enhancement method based on dense deep learning. The concrete implementation process is as follows:
step 1: for a dim light image dataset, not only a dim light image, i.e., an image with uneven illumination and insufficient illumination, but also an image with normal illumination in a corresponding scene is required as a label of a training model. In other words, the training set is paired in pairs of images. Firstly, shooting indoor and outdoor normal lighting scenes by using a camera with an adjustable exposure degree, adjusting the exposure degree in the corresponding scenes, and shooting a low-exposure image in a real scene; for the robustness of the model, a low-illumination image with a normal exposure degree, namely an image with insufficient and uneven natural illumination is shot in a corresponding scene. After data was collected again, the data was read at 8: 1: 1, dividing a training data set, a verification data set and a test data set.
Step 2: based on Retinex theory and a decomposition model with noise, a dense deep convolutional neural network model and corresponding loss functions are designed that make full use of contextual information, so that shallow features can repair structural information, deep features can repair damaged semantic information, and local and global illumination can be flexibly adjusted. The Retinex model and its noisy variant are expressed as

S = R ∘ L,
S = R ∘ L + N,

where S denotes the original image, R the decomposed reflectance map, L the decomposed illuminance map, N the noise component, and ∘ the element-wise product. The network model comprises 4 sub-networks: a decomposition network, a noise-reduction network, a recovery network, and a gradient-adaptive illumination adjustment network. The loss design comprises one loss function per sub-network: the decomposition loss L_dec, the noise-reduction loss L_den, the recovery loss L_res, and the adaptive adjustment loss L_adj.
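The Retinex relation above is a purely element-wise identity, which a few lines make concrete; `compose` and the numeric values are illustrative only:

```python
def compose(R, L, N=None):
    """Recompose an image from reflectance R, illuminance L, and optional
    noise N, element-wise, per the (noisy) Retinex decomposition model."""
    out = [[r * l for r, l in zip(rr, ll)] for rr, ll in zip(R, L)]
    if N is not None:
        out = [[o + n for o, n in zip(oo, nn)] for oo, nn in zip(out, N)]
    return out
```

A pixel with reflectance 0.5 under illuminance 0.4 plus noise 0.02 recomposes to 0.22, i.e. S = R·L + N holds per pixel.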
Step 2.1: the decomposition network. The network is formed by 5 densely connected convolutional layers so that context information propagates fully, with residual connections to prevent vanishing gradients and ease training; the dense block yields the first-level feature map F_1, from which two branches produce the reflectance map and the illuminance map. The training image pairs [S_l, S_h] are fed into the decomposition network to obtain the reflectance map R_h and illuminance map L_h of the normal-illumination image and the reflectance map R_l and illuminance map L_l of the low-illumination image. The decomposition result is constrained by reconstruction, similarity, and smoothness losses, finally yielding an illuminance map that retains the important structural information while remaining relatively smooth, and a reflectance map that contains the detail information together with a large amount of noise. The decomposition loss sums the three terms,

L_dec = L_recon + L_sim + L_smooth,

where the smoothness term involves the gradient maps ∇L_l, ∇L_h, ∇S_l, and ∇S_h of L_l, L_h, S_l, and S_h respectively, ||·||_2 denotes the L2 norm, and ||·||_1 denotes the L1 norm.
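The gradient maps ∇L used by the smoothness loss (and later by the adjustment network in step 2.4) are typically first-order finite differences; a minimal horizontal/vertical version, offered as an assumed concrete form since the patent's formula figure is not reproduced here:

```python
def gradient_maps(img):
    """First-order finite-difference gradients (horizontal, vertical)
    of a 2-D map given as a list of rows."""
    h = [[row[j + 1] - row[j] for j in range(len(row) - 1)] for row in img]
    v = [[img[i + 1][j] - img[i][j] for j in range(len(img[0]))]
         for i in range(len(img) - 1)]
    return h, v
```

A smoothness loss then penalizes the L1 or L2 norm of these difference maps, which pushes the illuminance map toward piecewise-smooth regions.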
Step 2.2: the noise-reduction network. The network is designed as 4 densely connected convolutional layers together with an illuminance-map attention module, and takes the spatially small noise component as its learning target, so that the guidance of the attention module (MAMI) over noise learning also lowers the learning difficulty. The MAMI module is a lightweight attention-extraction network composed of 4 plain convolutional layers. The R_l from step 2.1 is fed into the noise-reduction network and L_l into the MAMI module, with R_h from step 2.1 and the noise-reduction loss supervising the learning. The noise-reduction loss is the mean squared error

L_den = ||R̂_l − R_h||_2²,

where R̂_l denotes the reflectance map after noise reduction.
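The residual idea of step 2.2 — predict the (small) noise component and subtract it, with an MSE loss supervising the result — can be illustrated in a few lines; `predict_noise` is a hypothetical stand-in for the illumination-guided sub-network:

```python
def residual_denoise(reflectance, illuminance, predict_noise):
    """Subtract the predicted noise residual from the reflectance map."""
    noise = predict_noise(reflectance, illuminance)  # illumination-guided estimate
    return [[r - n for r, n in zip(rr, nn)]
            for rr, nn in zip(reflectance, noise)]

def mse_loss(pred, target):
    """L_den: mean squared error between denoised and reference reflectance."""
    flat_p = [v for row in pred for v in row]
    flat_t = [v for row in target for v in row]
    return sum((p - t) ** 2 for p, t in zip(flat_p, flat_t)) / len(flat_p)
```

Learning the small residual rather than the full clean map is what the patent describes as reducing the difficulty of the learning problem.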
Step 2.3: the recovery network. The network must repair not only the distortion that still remains but also the semantics damaged by the decoupling of the decomposition network, so it needs to make full use of all features from shallow to deep. The denoised R_l obtained in step 2.2 is therefore fed into the recovery network, where features are extracted by 2 residual dense convolutional blocks, each using 4 densely connected convolutional layers with a residual connection; the inputs of the two blocks are fused with the illuminance-map attention during training, and their outputs are the second- and third-level dense features F_2 and F_3. Finally the three levels of feature information are fused and passed through a convolutional layer and a Sigmoid layer to output the recovered reflectance map R_res. The recovery loss combines a structural-consistency term, a semantic-recovery term, and a similarity term,

L_res = (1 − SSIM(R_res, R_h)) + Σ_i ||Φ_i(R_res) − Φ_i(R_h)||_2² + ||R_res − R_h||_1,

where SSIM(·,·) denotes the structural similarity of two reflectance maps and Φ_i(·) denotes the i-th feature map of a VGG-16 network. Because the semantic-recovery term uses VGG-16 feature extraction, mimicking the way a computer vision algorithm extracts features, and works together with the fused three-level feature information, the reflectance map finally output by the recovery network not only eliminates a large amount of noise but also restores semantics and repairs distortion.
Step 2.4: the gradient-adaptive illumination adjustment network. The network consists of 4 plain convolutional layers and one Sigmoid layer. Its purpose is to use the ratio α = L_h / L_l between the label illuminance map and the input low-illuminance map, together with the gradient map ∇L_l of the low-illuminance map, to learn a mapping that can flexibly set the overall illumination level while adapting the enhancement locally. The low-illuminance map L_l from step 2.1, fused with the ratio α and the gradient map ∇L_l, is fed into the adjustment network to obtain the adjusted illuminance map L_br. The adjustment loss L_adj is a gradient-consistency constraint between the adjusted illuminance map L_br, with gradient map ∇L_br, and the label illuminance map L_h with gradient map ∇L_h.
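The role of the ratio input in step 2.4 — a single conditioning signal that sets the global brightness target — can be illustrated with a trivial per-pixel adjustment; the uniform ratio and the clamp to [0, 1] are simplifications of the learned network, which additionally uses the gradient map to adapt the enhancement locally:

```python
def adjust_illuminance(illum, ratio):
    """Brighten an illuminance map by a global ratio, clamped to [0, 1].

    A sketch of the ratio-conditioning idea only; the patent's network
    learns a locally adaptive mapping rather than this uniform scaling.
    """
    return [[min(1.0, v * ratio) for v in row] for row in illum]
```

Note how the clamp already gives a crude form of over-enhancement protection: a pixel near saturation stops growing while dark pixels are still lifted.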
Step 3: the recovered reflectance map R_res obtained in step 2.3 and the gradient-adaptively adjusted illuminance map L_br are synthesized according to the Retinex model, S_enh = R_res ∘ L_br, to give the final enhancement result S_enh. The result not only satisfies image-aesthetic evaluation but also performs well in various computer vision tasks.
Claims (6)
1. A dark-light image adaptive enhancement method based on dense deep learning, characterized in that a dense deep convolutional neural network model is designed for adaptive enhancement: a dense deep convolutional neural network decomposes the low-illumination image to obtain its reflectance map and illuminance map; the obtained illuminance map and the MAMI module guide a noise-reduction network to denoise the obtained reflectance map, yielding a denoised reflectance map; a recovery network then further denoises the reflectance map, repairs distortion, and restores semantics to obtain a recovered reflectance map; the decomposed illuminance map is adjusted by a gradient-adaptive illumination adjustment network to obtain a locally adaptively enhanced illuminance map; and finally the enhanced illuminance map and the recovered reflectance map are synthesized to obtain the target image.
2. The method according to claim 1, wherein the dense deep convolutional neural network model comprises four sub-network models and corresponding loss functions:
1) a decomposition network model using a dense convolutional neural network, a residual structure, and a decomposition loss function;
2) a noise-reduction network model that learns the noise component using the residual idea and a noise-reduction loss function;
3) a recovery network model using dense convolutional neural network blocks and a loss function with a semantic-recovery term;
4) an adjustment network model using illuminance gradient information and a gradient-adaptive illumination adjustment loss function.
3. The method according to claim 2, wherein the decomposition network model uses a dense convolutional neural network to extract deep feature information, introduces residual connections to improve gradient propagation and training, and is trained with the designed decomposition loss function.
4. The method according to claim 2, wherein the noise-reduction network model uses a dense convolutional neural network and the residual idea, with the attention module MAMI guiding the learning of the residual noise, and is trained with the designed noise-reduction loss function.
5. The method according to claim 2, wherein the recovery network model uses two dense convolutional neural network blocks with residual structures, fuses the feature information of each layer to recover the reflectance map, and uses a loss function that recovers semantic features to repair image semantics, so that the reflectance map recovered by the model outperforms those of other methods in computer vision tasks.
6. The method according to claim 2, wherein the gradient-adaptive illumination adjustment network model introduces the gradient map of the illuminance map to achieve locally adaptive enhancement and avoid over-enhancement, and introduces the ratio between the target illuminance and the low illuminance to flexibly adjust the overall illumination.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110251337.XA CN112767286A (en) | 2021-03-08 | 2021-03-08 | Dark light image self-adaptive enhancement method based on intensive deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110251337.XA CN112767286A (en) | 2021-03-08 | 2021-03-08 | Dark light image self-adaptive enhancement method based on intensive deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112767286A true CN112767286A (en) | 2021-05-07 |
Family
ID=75690810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110251337.XA Pending CN112767286A (en) | 2021-03-08 | 2021-03-08 | Dark light image self-adaptive enhancement method based on intensive deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112767286A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113192055A (en) * | 2021-05-20 | 2021-07-30 | 中国海洋大学 | Harmonious method and model for synthesizing image |
CN113192055B (en) * | 2021-05-20 | 2023-01-17 | 中国海洋大学 | Harmonious method and model for synthesizing image |
CN113256528A (en) * | 2021-06-03 | 2021-08-13 | 中国人民解放军国防科技大学 | Low-illumination video enhancement method based on multi-scale cascade depth residual error network |
CN113256528B (en) * | 2021-06-03 | 2022-05-27 | 中国人民解放军国防科技大学 | Low-illumination video enhancement method based on multi-scale cascade depth residual error network |
CN114004761A (en) * | 2021-10-29 | 2022-02-01 | 福州大学 | Image optimization method integrating deep learning night vision enhancement and filtering noise reduction |
CN113744164A (en) * | 2021-11-05 | 2021-12-03 | 深圳市安软慧视科技有限公司 | Method, system and related equipment for enhancing low-illumination image at night quickly |
CN114399431A (en) * | 2021-12-06 | 2022-04-26 | 北京理工大学 | Dim light image enhancement method based on attention mechanism |
CN114399431B (en) * | 2021-12-06 | 2024-06-04 | 北京理工大学 | Dim light image enhancement method based on attention mechanism |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112767286A (en) | Dark light image self-adaptive enhancement method based on intensive deep learning | |
CN108875935B (en) | Natural image target material visual characteristic mapping method based on generation countermeasure network | |
CN113313644B (en) | Underwater image enhancement method based on residual double-attention network | |
CN114627006B (en) | Progressive image restoration method based on depth decoupling network | |
CN114066747B (en) | Low-illumination image enhancement method based on illumination and reflection complementarity | |
CN111738948B (en) | Underwater image enhancement method based on double U-nets | |
CN113052814B (en) | Dim light image enhancement method based on Retinex and attention mechanism | |
CN113034413B (en) | Low-illumination image enhancement method based on multi-scale fusion residual error coder-decoder | |
CN116152120A (en) | Low-light image enhancement method and device integrating high-low frequency characteristic information | |
CN115829876A (en) | Real degraded image blind restoration method based on cross attention mechanism | |
CN116958534A (en) | Image processing method, training method of image processing model and related device | |
CN115953311A (en) | Image defogging method based on multi-scale feature representation of Transformer | |
CN114862707A (en) | Multi-scale feature recovery image enhancement method and device and storage medium | |
CN114881879A (en) | Underwater image enhancement method based on brightness compensation residual error network | |
CN114862697A (en) | Face blind repairing method based on three-dimensional decomposition | |
Chen et al. | CERL: A unified optimization framework for light enhancement with realistic noise | |
Liu et al. | WSDS-GAN: A weak-strong dual supervised learning method for underwater image enhancement | |
CN117689592A (en) | Underwater image enhancement method based on cascade self-adaptive network | |
CN116503286A (en) | Retinex theory-based low-illumination image enhancement method | |
CN116645281A (en) | Low-light-level image enhancement method based on multi-stage Laplace feature fusion | |
CN116452431A (en) | Weak light image enhancement method based on multi-branch progressive depth network | |
Kumar et al. | Underwater image enhancement using deep learning | |
Liang et al. | Multi-scale and multi-patch transformer for sandstorm image enhancement | |
CN115829868A (en) | Underwater dim light image enhancement method based on illumination and noise residual error image | |
CN115376066A (en) | Airport scene target detection multi-weather data enhancement method based on improved cycleGAN |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||