CN112927164B - No-reference low-illumination image enhancement method based on deep convolutional neural network

Info

Publication number
CN112927164B
CN112927164B (application CN202110304197.8A; published as CN112927164A)
Authority
CN
China
Prior art keywords
image
component
reflection component
illumination
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110304197.8A
Other languages
Chinese (zh)
Other versions
CN112927164A (en)
Inventor
Chen Yong
Chen Dong
Liu Huanlin
Jin Manli
Wang Bo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202110304197.8A
Publication of CN112927164A (application publication)
Publication of CN112927164B (grant publication; application granted)
Legal status: Active

Classifications

    • G06T 5/90 (Image enhancement or restoration): dynamic range modification of images or parts thereof
    • G06T 5/70 (Image enhancement or restoration): denoising; smoothing
    • G06N 3/045 (Neural network architectures): combinations of networks
    • G06N 3/084 (Neural network learning methods): backpropagation, e.g. using gradient descent
    • G06T 2207/20081 (Special algorithmic details): training; learning
    • G06T 2207/20084 (Special algorithmic details): artificial neural networks [ANN]
    • Y02T 10/40 (Climate change mitigation in transportation): engine management systems


Abstract

The invention relates to a no-reference low-illumination image enhancement method based on a deep convolutional neural network, and belongs to the field of image processing. First, a feature extraction module comprising two branches is constructed with a deep convolutional neural network, and an illumination component and a reflection component are extracted from the input low-illumination image. The reflection component is then denoised and fused into an optimization network to obtain the optimized reflection component. The illumination component is likewise input into an optimization network to obtain the optimized illumination component. Finally, the optimized illumination component and the optimized reflection component are multiplied to obtain the final enhancement result. The invention makes full use of the reflection component extracted from the input image, effectively reduces noise interference in the image, and improves the expression of detail.

Description

No-reference low-illumination image enhancement method based on deep convolutional neural network
Technical Field
The invention belongs to the field of image processing, and relates to a no-reference low-illumination image enhancement method based on a deep convolutional neural network.
Background
Low-light environments generally refer to poor lighting conditions, such as cloudy weather, nighttime, and indoor scenes. In addition, imaging noise is inevitably introduced by the photographing apparatus during imaging. This noise is further exacerbated under low-light conditions; it is therefore necessary to both enhance and denoise low-illumination images.
At present, low-illumination image enhancement methods can be broadly divided into traditional methods and deep learning methods. Typical traditional methods include histogram equalization and methods based on Retinex theory. Histogram equalization extends the dynamic range of an image by adjusting its gray levels to be evenly distributed over the entire gray scale. Retinex-based methods extract an illumination component and a reflection component from the low-illumination image; because Retinex theory regards the reflection component as representing the essential attributes of the photographed scene, unaffected by illumination changes, the extracted reflection component is generally used as the enhancement result. Traditional methods are constrained by prior knowledge, and their enhancement effect is not always good.
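For reference, the Retinex image model underlying these methods can be written as follows (this is the standard formulation with conventional symbols, not notation taken from the patent itself):

```latex
% Retinex decomposition: an observed image S is the pixel-wise product of
% a reflectance component R (scene-intrinsic, illumination-invariant) and
% an illumination component L.
S(x, y) = R(x, y) \cdot L(x, y), \qquad R(x, y) \in [0, 1], \quad L(x, y) > 0
```

Methods in this family recover R and L from S and either output R directly or recombine R with an adjusted L.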
Because deep learning has achieved remarkable results in many fields in recent years, using deep learning to realize low-illumination image enhancement has become an important trend, with convolutional neural networks the most prominent approach. Under the constraint of a loss function, a constructed deep neural network learns the mapping from input to output by training on a large amount of data. According to the training mode, such methods can be divided into reference training and no-reference training. Most existing work on low-illumination image enhancement with deep learning adopts training with reference images, which gives the network a definite optimization direction and lowers the training difficulty; however, the enhancement result is strongly limited by the quality of the reference images, and collecting and producing paired images is difficult. No-reference training is still at a preliminary research stage, but it has notable advantages: the data set is simple to produce, samples are abundant, the enhancement result is less constrained and can even exceed the quality of reference images, and prior knowledge about the color, exposure, and spatial structure of low-illumination and normal-illumination images can be studied and used as guidance for training. Another major problem in low-illumination image enhancement is preventing noise in the original image from being amplified along with the enhancement. Some methods apply image denoising as a pre-processing or post-processing step, which can cause loss of detail information. Another idea is to merge denoising into the enhancement process itself, but this brings new difficulties to the design of the network and the loss function.
Disclosure of Invention
In view of the above, the present invention provides a no-reference low-illumination image enhancement method based on a deep convolutional neural network. To improve the retention of detail information in the enhancement result and the suppression of noise from the original image, the method uses a convolutional neural network to extract an illumination component and a reflection component from the input low-illumination image, incorporates the effect of a denoising algorithm into the adjustment of the reflection component so that the network gains the ability to suppress noise, appropriately constrains the texture structure of the output in the loss function, and, in combination with a no-reference training mode, better retains the detail information of the input image.
In order to achieve this purpose, the invention provides the following technical scheme:
A no-reference low-illumination image enhancement method based on a deep convolutional neural network, comprising the following steps:
S1: constructing a data set for network training according to the requirements on the low-illumination images, and performing the necessary preprocessing;
S2: constructing a decomposition network based on a deep convolutional neural network to extract the reflection component and the illumination component of the input image;
S3: constructing an optimization network for processing the reflection component, realizing refinement of the reflection component and noise suppression;
S4: constructing an optimization network for processing the illumination component, realizing further optimization of the illumination component;
S5: multiplying the optimized reflection component and the optimized illumination component to obtain the enhanced image, adjusting the training process with an optimizer, and training the network according to the calculated value of the loss function so that the network converges to the best effect.
Optionally, S1 specifically includes the following steps:
S11: performing color space conversion on the input images and retaining those whose average brightness is below a set threshold, so as to construct a training image library;
S12: proportionally scaling images in the image library whose size is inconsistent with the preset size;
S13: randomly selecting images from the image library and randomly flipping them to form the final training image library.
Optionally, S2 specifically includes the following steps:
S21: constructing a decomposition network based on a deep convolutional neural network, extracting the reflection component and the illumination component of the input image, and obtaining the corresponding feature maps;
S22: adjusting the activation level of the feature maps using a size-invariant convolution module.
Optionally, S3 specifically includes the following steps:
S31: performing a down-sampling operation on the input reflection component, followed by scale-invariant feature transformation and up-sampling operations;
S32: adjusting the reflection component with a denoising algorithm and scaling it so that its size is consistent with that of the reflection component produced by the operations in S31.
Optionally, S4 specifically includes the following steps:
S41: constructing an illumination component optimization network based on size-invariant convolution;
S42: inputting the illumination component feature map of the image into the optimization network to adjust the relevant parameters.
Optionally, S5 specifically includes the following steps:
S51: multiplying the optimized illumination component and the optimized reflection component to obtain the enhancement result;
S52: establishing a no-reference loss according to prior knowledge of the color, spatial structure, and exposure of low-illumination and normal-illumination images;
S53: inputting the enhancement result into the constructed no-reference loss and calculating the loss value;
S54: updating the network parameters with an optimizer using gradient back-propagation;
S55: training the model for multiple rounds until the effect meets the requirement, so as to enhance low-illumination images.
The beneficial effects of the invention are as follows: the method considers the consistency of low-illumination and normal-illumination images in the variation trend of texture structure and combines this with the characteristics of the reflection component, so that the texture detail in the enhancement result is richer. In addition, the denoising effect of a denoising algorithm is integrated into the network, so that the network improves detail while suppressing noise well. Finally, the no-reference training mode adopted by the invention alleviates, to a certain extent, the difficulty of constructing a training data set and is convenient to implement.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a network model structure of the present invention;
FIG. 2 is a model structure of a decomposed network;
FIG. 3 is a model structure of a reflection component optimization network;
FIG. 4 is a model structure of an illumination component optimization network;
FIG. 5 is a schematic diagram of the enhancement effect of the present invention: (a) is the low-illumination image; (b) is the result of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided for the purpose of illustrating the invention only and are not intended to limit it. To better illustrate the embodiments, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
A no-reference low-illumination image enhancement method based on a deep convolutional neural network realizes low-illumination image enhancement on the basis of Retinex theory. First, an illumination component and a reflection component are extracted from the input low-illumination image by a decomposition network. The illumination component and the reflection component are then optimized separately by optimization networks. Finally, the two optimized components are multiplied to obtain the result.
The decomposition network extracts feature maps of the illumination component and the reflection component from the input image. The reflection component is denoised with a denoising algorithm, and the denoised reflection component is scaled to keep its scale consistent with that of the input reflection-component feature map; both are then input into the reflection component optimization network to obtain the optimized reflection component. At the same time, the illumination component is input into an illumination component optimization network built from size-invariant convolutions to obtain the optimized illumination component. Finally, the optimized illumination component and the optimized reflection component are multiplied to obtain the enhanced image.
The network model structure of the no-reference image enhancement method based on the deep convolutional neural network is shown in fig. 1, and specifically comprises the following steps:
1. construction of training image data sets
(1) Perform color space conversion on the input images, and list those whose average brightness is below a set threshold as low-illumination images.
(2) Expand the training set by scaling, rotation, horizontal flipping, and similar transformations to improve the effectiveness of the features learned by the model. A sketch of this preprocessing follows.
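A minimal Python sketch of this dataset construction is given below. The brightness threshold, target size, and the choice of HSV as the working color space are illustrative assumptions; the patent does not fix these values.

```python
import glob
import random

import cv2
import numpy as np

BRIGHTNESS_THRESHOLD = 0.35  # assumed: mean V (HSV) below this => low illumination
TARGET_SIZE = (256, 256)     # assumed preset training size (width, height)

def build_training_library(image_dir):
    """Collect low-illumination images, resize them, and randomly flip them."""
    library = []
    for path in glob.glob(f"{image_dir}/*.png"):
        img = cv2.imread(path)
        if img is None:
            continue
        # Color space conversion: use the V channel of HSV as brightness.
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        if hsv[..., 2].mean() / 255.0 >= BRIGHTNESS_THRESHOLD:
            continue  # keep only images darker than the threshold
        # Scale images whose size differs from the preset size.
        if img.shape[:2] != TARGET_SIZE[::-1]:
            img = cv2.resize(img, TARGET_SIZE, interpolation=cv2.INTER_AREA)
        # Random horizontal flip as augmentation.
        if random.random() < 0.5:
            img = cv2.flip(img, 1)
        library.append(img.astype(np.float32) / 255.0)
    return library
```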
2. Component extraction
The component extraction network of the present invention uses a decomposition network constructed based on a deep convolutional neural network to extract an illumination component and a reflection component in an input image and obtain corresponding feature maps, as shown in fig. 2.
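A minimal PyTorch sketch of such a two-branch decomposition network is given below. The trunk depth, channel width, and activations are assumptions made for illustration; the patent specifies only that the network produces the two components and their feature maps.

```python
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Two-branch decomposition: image -> (reflectance R, illumination L).

    Illustrative sketch; depths and widths are assumptions, not the
    patented architecture.
    """
    def __init__(self, channels=32):
        super().__init__()
        # Shared feature-extraction trunk.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.reflect_head = nn.Conv2d(channels, 3, 3, padding=1)  # 3-channel R
        self.illum_head = nn.Conv2d(channels, 1, 3, padding=1)    # 1-channel L

    def forward(self, x):
        feat = self.trunk(x)
        R = torch.sigmoid(self.reflect_head(feat))  # reflectance in [0, 1]
        L = torch.sigmoid(self.illum_head(feat))    # illumination in [0, 1]
        return R, L
```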
3. Reflection component optimization
(1) As shown in fig. 3, the reflection component optimization network of the present invention first down-samples the reflection component, then applies scale-invariant feature transformations and up-sampling.
(2) In parallel, the reflection component is adjusted with a denoising algorithm and scaled so that its size matches that of the reflection component produced by the operations in (1); both are then fused in the optimization network. A sketch of this module follows.
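The sketch below mirrors that structure under stated assumptions: the patent does not name the denoising algorithm, so a simple box blur stands in for it here, and the down-sampling factor is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReflectanceOptimizer(nn.Module):
    """Refine the reflectance: fuse a denoised copy, downsample,
    apply size-invariant convolutions, and upsample back.

    Illustrative sketch, not the patented architecture.
    """
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(6, channels, 3, stride=2, padding=1),  # down-sampling
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),     # size-invariant
            nn.ReLU(inplace=True),
        )
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    @staticmethod
    def denoise(R):
        # Placeholder denoiser (box blur); any off-the-shelf algorithm fits.
        return F.avg_pool2d(R, 3, stride=1, padding=1)

    def forward(self, R):
        R_dn = self.denoise(R)           # denoised reflectance, same size as R
        feat = self.encode(torch.cat([R, R_dn], dim=1))
        feat = F.interpolate(feat, size=R.shape[-2:],
                             mode="bilinear", align_corners=False)  # up-sampling
        return torch.sigmoid(self.decode(feat))
```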
4. Illumination component optimization
The illumination component optimization network of the present invention is shown in fig. 4; it obtains the optimized illumination component by applying several size-invariant convolutions to the illumination component, as sketched below.
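A corresponding sketch follows, again with assumed depth and width; "size-invariant" is read here as stride-1, padded convolutions that preserve the spatial dimensions.

```python
import torch
import torch.nn as nn

class IlluminationOptimizer(nn.Module):
    """Several size-invariant convolutions applied to the illumination map."""
    def __init__(self, channels=16, num_layers=3):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, L):
        # Keep the output strictly positive so the final product is stable.
        return torch.sigmoid(self.body(L)) + 1e-4
```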
5. Determination of enhanced results
Fig. 5 is a schematic diagram of the enhancement results of the present invention: (a) is the low-illumination image and (b) is the result of the present invention. The optimized illumination component and the optimized reflection component are multiplied to obtain the enhanced result.
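Continuing the sketches above, the forward pass of the whole model reduces to a few lines (all names come from the illustrative sketches, not from the patent):

```python
import torch

decomp = DecompositionNet()
r_opt_net = ReflectanceOptimizer()
l_opt_net = IlluminationOptimizer()

x = torch.rand(1, 3, 256, 256)  # stand-in low-illumination input
R, L = decomp(x)                # decomposition into components
R_opt = r_opt_net(R)            # reflectance refinement with denoising
L_opt = l_opt_net(L)            # illumination refinement
enhanced = R_opt * L_opt        # recomposition: the enhancement result
```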
The method of the invention mainly comprises two stages: training and testing.
(1) Training phase
The training phase mainly comprises feature extraction and updating of the model weight parameters. The model is trained with the preprocessed images; the results of the model's forward computation are substituted into the loss function to calculate the individual loss values, and the model weights are updated by gradient back-propagation according to the total loss. When the number of training iterations reaches a preset value, the training process terminates and the weight parameters are saved.
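The patent names the priors (color, spatial structure, exposure) but not the loss formulas, so the sketch below assembles a plausible no-reference loss from exposure-control, gray-world color, and gradient-consistency terms of the kind used in no-reference enhancement work; all weights and target values are assumptions. It continues the variable names of the forward-pass sketch above.

```python
import torch
import torch.nn.functional as F

def exposure_loss(img, patch=16, well_exposed=0.6):
    """Penalize patches whose mean brightness strays from a target level."""
    mean = F.avg_pool2d(img.mean(dim=1, keepdim=True), patch)
    return ((mean - well_exposed) ** 2).mean()

def color_loss(img):
    """Gray-world prior: the three channel means should roughly agree."""
    r, g, b = img.mean(dim=(2, 3)).unbind(dim=1)
    return ((r - g) ** 2 + (g - b) ** 2 + (b - r) ** 2).mean()

def structure_loss(enhanced, original):
    """Texture prior: gradients of the result should follow the input."""
    def grads(t):
        return t[..., :, 1:] - t[..., :, :-1], t[..., 1:, :] - t[..., :-1, :]
    (ex, ey), (ox, oy) = grads(enhanced), grads(original)
    return F.l1_loss(ex, ox) + F.l1_loss(ey, oy)

params = (list(decomp.parameters()) + list(r_opt_net.parameters())
          + list(l_opt_net.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

# One training step with illustrative loss weights.
total_loss = (exposure_loss(enhanced)
              + 0.5 * color_loss(enhanced)
              + 0.2 * structure_loss(enhanced, x))
optimizer.zero_grad()
total_loss.backward()  # gradient back-propagation
optimizer.step()       # update the weight parameters
```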
(2) Testing phase
The testing stage loads the trained model weights and applies only scaling to the input image so that its size meets the input requirement of the model. At this point the model no longer performs gradient back-propagation but directly outputs the enhancement result, thereby enhancing the input low-illumination image.
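At test time the same forward pass runs without gradient tracking (continuing the sketch names):

```python
import torch

decomp.eval()
r_opt_net.eval()
l_opt_net.eval()
with torch.no_grad():  # no gradient back-propagation at inference
    R, L = decomp(x)
    result = r_opt_net(R) * l_opt_net(L)
```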
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (1)

1. A no-reference low-illumination image enhancement method based on a deep convolutional neural network, characterized by comprising the following steps:
S1: constructing a data set for network training according to the requirements on the low-illumination images, and performing the necessary preprocessing;
S2: constructing a decomposition network based on a deep convolutional neural network to extract the reflection component and the illumination component of the input image;
S3: constructing an optimization network for processing the reflection component, realizing refinement of the reflection component and noise suppression;
S4: constructing an optimization network for processing the illumination component, realizing further optimization of the illumination component;
S5: multiplying the optimized reflection component and the optimized illumination component to obtain the enhanced image, adjusting the training process with an optimizer, and training the network according to the calculated value of the loss function so that the network converges to the best effect;
the S1 specifically comprises the following steps:
s11: performing color space conversion on an input image, and reserving an image with an average brightness value smaller than a set threshold value so as to construct a training image library;
s12: scaling an image with a size inconsistent with a preset size in the image library according to a certain proportion;
s13: randomly selecting images from the image library, and randomly turning the images to form a final training image library;
the S2 specifically comprises the following steps:
s21: constructing a decomposition network based on a deep convolutional neural network, extracting a reflection component and an illumination component of an input image and obtaining a corresponding characteristic map;
s22: adjusting the activation degree of the feature map by using a size-invariant convolution module;
the S3 specifically comprises the following steps:
s31: carrying out down-sampling operation on the input reflection component, and then carrying out scale invariant feature transformation and up-sampling operation;
s32: adjusting the reflection component by using a denoising algorithm, scaling the scale of the reflection component, and simultaneously changing the size of the reflection component to keep the same as the size of the reflection component subjected to different operations;
the S4 specifically comprises the following steps:
s41: constructing an irradiation component optimization network based on the size invariant convolution;
s42: inputting an illumination component characteristic diagram of the image into an optimization network to adjust related parameters;
the S5 specifically comprises the following steps:
s51: multiplying the optimized illumination component and the optimized reflection component to obtain an enhancement result;
s52: establishing no reference loss according to prior knowledge in the field of color, space structure and exposure degree of the low-illumination image and the normal-illumination image;
s53: inputting the enhancement result into the constructed non-reference loss, and calculating a loss value;
s54: updating network parameters by using an optimizer and adopting a gradient back propagation algorithm;
s55: and performing multi-round training on the model until the effect meets the requirement so as to enhance the low-illumination image.
CN202110304197.8A (priority and filing date: 2021-03-22) No-reference low-illumination image enhancement method based on deep convolutional neural network (Active; granted as CN112927164B)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110304197.8A CN112927164B (en) 2021-03-22 2021-03-22 No-reference low-illumination image enhancement method based on deep convolutional neural network

Publications (2)

Publication Number Publication Date
CN112927164A CN112927164A (en) 2021-06-08
CN112927164B (en) 2023-04-07

Family

ID=76175380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110304197.8A Active CN112927164B (en) 2021-03-22 2021-03-22 No-reference low-illumination image enhancement method based on deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN112927164B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128768B * 2023-04-17 2023-07-11 China University of Petroleum (East China) Unsupervised image low-illumination enhancement method with denoising module


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009212824A (en) * 2008-03-04 2009-09-17 Funai Electric Co Ltd Skin area detection imaging apparatus
JP6581068B2 (en) * 2016-11-11 2019-09-25 Toshiba Corporation Image processing apparatus, image processing method, program, operation control system, and vehicle
CN110232661B (en) * 2019-05-03 2023-01-06 Tianjin University Low-illumination color image enhancement method based on Retinex and convolutional neural network
CN112381897B (en) * 2020-11-16 2023-04-07 Xidian University Low-illumination image enhancement method based on self-coding network structure
CN112465726A (en) * 2020-12-07 2021-03-09 Beijing University of Posts and Telecommunications Low-illumination adjustable brightness enhancement method based on reference brightness index guidance
CN112465727A (en) * 2020-12-07 2021-03-09 Beijing University of Posts and Telecommunications Low-illumination image enhancement method without normal illumination reference based on HSV color space and Retinex theory

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578383A (en) * 2017-08-29 2018-01-12 Beijing Huayi Mingxin Technology Co., Ltd. Low-light-level image enhancement processing method
CN108596082A (en) * 2018-04-20 2018-09-28 Chongqing University of Posts and Telecommunications Face liveness detection method based on an image diffusion speed model and color features
CN110163818A (en) * 2019-04-28 2019-08-23 Wuhan University of Technology Low-illumination video image enhancement method for maritime unmanned aerial vehicles
CN110276729A (en) * 2019-06-10 2019-09-24 Zhejiang University of Technology Enhancement method for low-brightness color images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dubravko Culibrk et al., "Neural Network Approach to Background Modeling for Video Object Segmentation," IEEE Transactions on Neural Networks, vol. 18, no. 6, 2007. *

Also Published As

Publication number Publication date
CN112927164A (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN112614077B Unsupervised low-illumination image enhancement method based on a generative adversarial network
CN113313657B (en) Unsupervised learning method and system for low-illumination image enhancement
CN108564549B (en) Image defogging method based on multi-scale dense connection network
CN109872285A Retinex low-luminance color image enhancement method based on variational methods
CN111968041A (en) Self-adaptive image enhancement method
CN110807742B (en) Low-light-level image enhancement method based on integrated network
Lepcha et al. A deep journey into image enhancement: A survey of current and emerging trends
CN103679173A (en) Method for detecting image salient region
CN112184646B (en) Image fusion method based on gradient domain oriented filtering and improved PCNN
CN109448019B (en) Adaptive method for smoothing parameters of variable-split optical flow model
CN113643201A (en) Image denoising method of self-adaptive non-local mean value
CN113129236A (en) Single low-light image enhancement method and system based on Retinex and convolutional neural network
CN114782298B (en) Infrared and visible light image fusion method with regional attention
CN110675379A (en) U-shaped brain tumor segmentation network fusing cavity convolution
CN115457249A (en) Method and system for fusing and matching infrared image and visible light image
CN112927164B (en) No-reference low-illumination image enhancement method based on deep convolutional neural network
Sonker et al. Comparison of histogram equalization techniques for image enhancement of grayscale images of dawn and dusk
CN117391981A (en) Infrared and visible light image fusion method based on low-light illumination and self-adaptive constraint
CN117274085A (en) Low-illumination image enhancement method and device
CN110084774B (en) Method for minimizing fusion image by enhanced gradient transfer and total variation
CN114155161B (en) Image denoising method, device, electronic equipment and storage medium
CN108171676A (en) Multi-focus image fusing method based on curvature filtering
CN108550124B (en) Illumination compensation and image enhancement method based on bionic spiral
CN117635471A (en) Low-illumination image enhancement method for wireless capsule endoscope
CN105160635B (en) A kind of image filtering method based on fractional order differential estimation gradient field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant