CN112734673B - Low-illumination image enhancement method and system based on multi-expression fusion - Google Patents

Low-illumination image enhancement method and system based on multi-expression fusion Download PDF

Info

Publication number
CN112734673B
CN112734673B CN202110044526.XA CN202110044526A CN112734673B CN 112734673 B CN112734673 B CN 112734673B CN 202110044526 A CN202110044526 A CN 202110044526A CN 112734673 B CN112734673 B CN 112734673B
Authority
CN
China
Prior art keywords
illumination
image
low
scale
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110044526.XA
Other languages
Chinese (zh)
Other versions
CN112734673A (en
Inventor
林明星
代成刚
管志光
张东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202110044526.XA priority Critical patent/CN112734673B/en
Publication of CN112734673A publication Critical patent/CN112734673A/en
Priority to LU500193A priority patent/LU500193B1/en
Application granted granted Critical
Publication of CN112734673B publication Critical patent/CN112734673B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The utility model provides a low illumination image enhancement method and system based on multi-expression fusion, the scheme includes: acquiring a low-illumination image to be enhanced; inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion, and outputting an enhanced image; the convolutional neural network obtains multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits a relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics. The scheme can better fit the mathematical relation between the brightness and the original image by using a small number of parameters, thereby realizing excellent enhancement effect.

Description

Low-illumination image enhancement method and system based on multi-expression fusion
Technical Field
The disclosure belongs to the technical field of image enhancement, and particularly relates to a low-illumination image enhancement method and system based on multi-expression fusion.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Because of the superior performance of convolutional neural networks in image processing tasks, a number of convolutional neural networks have been developed to enhance low-light images. At present, most of low-illumination image enhancement convolutional neural networks are constructed on the basis of Retinex theory. The idea of this type of network is to learn the luminance component from the input image and remove the luminance component from the original image, thereby achieving low-luminance image enhancement. However, the inventor found that, because the mathematical relationship between the brightness of the image and the original image is complex, a network including many convolutional layers is required to fit the relationship between the brightness and the original image, and the excessive convolutional layers increase the network training cost on one hand and also affect the efficiency of the image enhancement processing on the other hand; in addition, since an accurate enhancement effect cannot be obtained if the number of convolutional layers is too small, the conventional method also requires a large amount of debugging work in determining a reasonable number of convolutional layers to fit the relationship between luminance and an original image.
Disclosure of Invention
The present disclosure provides a method and a system for enhancing a low-illumination image based on multi-expression fusion, which can better fit a mathematical relationship between luminance and an original image with only a small number of parameters, thereby achieving an excellent enhancement effect.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for enhancing a low-illumination image based on multi-expression fusion, including:
acquiring a low-illumination image to be enhanced;
inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion, and outputting an enhanced image;
the convolutional neural network obtains multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits a relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics.
Further, the relationship function between the low-illuminance image and the illuminance is specifically expressed as follows:
Figure BDA0002896666340000021
wherein z isc(x) Is the illuminance, w1,w2,w3,w4To fuse the weight maps, f is the multi-scale feature of the low-illumination image.
Further, the convolutional neural network comprises a multi-scale convolution module, a weight map acquisition module, an illumination calculation module and an enhanced image solving module.
Furthermore, the multi-scale convolution module generates 4 features of different scales of the low-illumination image through 4 parallel convolution layers, and meanwhile, two convolution layers are adopted to compress the multi-scale features to 3 channels after two layers of parallel convolution layers.
Further, in order to obtain more scale features, the multi-scale convolution module can also generate 16 scale features by using two layers of 4 parallel convolution layers, and simultaneously compress the multi-scale features to 3 channels by using two convolution layers after the two layers of parallel convolution layers.
Further, the weight map obtaining module includes two convolution layers, outputs a 12-channel weight map, and obtains 4 fused weight maps of 3 channels by separating the 12-channel weight map.
Further, the illumination calculation module receives output results of the multi-scale convolution module and the weight map acquisition module, and obtains an illumination calculation result by using a relation function between the low illumination image and the illumination.
Further, the enhanced image solving module may obtain an enhanced image by taking the low-illuminance image based on the Retinex theory as a product of the illuminance and the real image.
According to a second aspect of the embodiments of the present disclosure, there is provided a low-illumination image enhancement system based on multi-expression fusion, including:
the image acquisition module is used for acquiring a low-illumination image to be enhanced;
the image enhancement module is used for inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion and outputting an enhanced image;
the convolutional neural network obtains the multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits the relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including a memory, a processor, and a computer program stored in the memory and running on the memory, where the processor implements the method for enhancing a low-illumination image based on multi-expression fusion when executing the program.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor, implements the method for enhancing a low-illuminance image based on multi-expression fusion.
Compared with the prior art, the beneficial effect of this disclosure is:
considering that the illumination can be represented as a certain combination of the multi-scale features of the original image according to the priori knowledge, the scheme disclosed by the disclosure considers the priori knowledge, innovatively applies a multi-expression fusion mode to fit the mathematical relationship between the original image and the illumination, and can better fit the mathematical relationship between the brightness and the original image by using a small number of parameters, thereby realizing an excellent enhancement effect.
Advantages of additional aspects of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
Fig. 1 is a schematic structural diagram of a multi-expression fused low-illumination image enhancement convolutional neural network according to a first embodiment of the present disclosure;
fig. 2 is a schematic diagram of an experimental result of low-illuminance image enhancement according to a first embodiment of the disclosure.
Detailed Description
The present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The first embodiment is as follows:
the embodiment aims at a low-illumination image enhancement method based on multi-expression fusion.
A low-illumination image enhancement method based on multi-expression fusion comprises the following steps:
acquiring a low-illumination image to be enhanced;
inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion, and outputting an enhanced image;
the convolutional neural network obtains the multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits the relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics.
Specifically, for ease of understanding, the scheme of the present disclosure is described in detail below with reference to fig. 1:
theoretical basis
According to Retinex theory, an image can be expressed as the product of illumination and the real scene, as shown in equation (1).
Qc(x)=Zc(x)·Jc(x) (1)
Wherein x is pixel coordinate, c represents three channels of red, green and blue of image, and Jc(x) For real scenes, Qc(x) For low-light images, Zc(x) For illumination, '-' indicates a corresponding multiplication of matrix elements.
Fitting relation between (II) low-illumination image and illumination
At present, most low illumination enhancement networks directly learn illumination from an original image, but the mathematical relationship between illumination and the original image is complex, so that a large number of convolutional layers are needed to better fit the relationship between illumination and the original image. According to a priori knowledge, the illumination may be represented as some combination of multi-scale features of the original image. The prior knowledge is considered, and a multi-expression fusion mode is innovatively applied to fit the mathematical relation between the original image and the illumination intensity, as shown in a formula (2).
Figure BDA0002896666340000051
In the formula w1,w2,w3,w4Is a fusion weight map. f is a multi-scale feature obtained by performing multi-scale convolution on the original image. Since the mathematical mapping form of the multi-scale features to illumination is unknown, the present disclosure fuses several mathematical relationships that are commonly used, including: logarithm, exponent, power function, addition operation. In summary, equation (2) is further transformed to:
Figure BDA0002896666340000052
wherein M is [.]Is a multi-scale convolution module. Illuminance Zc(x) After obtaining, the image with enhanced brightness can be obtained according to the formula (1), as shown in the formula (4).
Figure BDA0002896666340000053
And (4) constructing a convolutional neural network to enhance the low-illumination image according to the formula (4).
Construction of (tri) convolutional neural network
The convolutional neural network structure of the present disclosure is shown in fig. 1, and the convolutional neural network includes a multi-scale convolution module, a weight map acquisition module, an illuminance calculation module, and an enhanced image acquisition module. The multi-scale convolution module generates 4 features with different scales of the low-illumination image through 4 parallel convolution layers, and simultaneously compresses the multi-scale features to 3 channels by adopting two convolution layers after two layers of parallel convolution layers; in order to obtain more scale features, the multi-scale convolution module can also adopt two layers of 4 parallel convolution layers to generate 16 scale features, and simultaneously adopts two convolution layers to compress the multi-scale features to 3 channels after the two layers of parallel convolution layers; the weight map acquisition module comprises two convolution layers, outputs a weight map of 12 channels, and obtains fusion weight maps of 4 channels and 3 channels by separating the weight maps of 12 channels; the illumination calculation module receives output results of the multi-scale convolution module and the weight map acquisition module and obtains an illumination calculation result by using a relation function between the low-illumination image and the illumination; the enhanced image solving module can express the image with low illumination intensity as the product of illumination intensity and a real image based on Retinex theory to obtain an enhanced image.
(IV) network training
The loss function used to train the network is shown in equation (5). The loss function includes two terms: a mean square error term and a feature loss term. The mean square error term minimizes the difference between the pixels of the output image and the reference image. The feature loss term minimizes the difference between the output image and the high-level features of the reference image.
Figure BDA0002896666340000061
Gc(x) Is a reference image. T is a unit ofi[.]For the feature extractor, the present disclosure employs a VGG16 network as the feature extractor, and only layer 2, 14, 30 features are employed in order to increase training speed. I2Is L2 norm, | |1Is a norm of L1, and λ is an adjustment factor, and the weight used to adjust the two loss terms is set to 0.01.
And (3) training the network by adopting 500 images with natural illumination and synthesizing 500 images with low illumination.
The steps of synthesizing the low-illuminance images are shown in equations (6) and (7).
Figure BDA0002896666340000062
Figure BDA0002896666340000063
In the formula, H (x), S (x), V (x) represent H, S and V channels of a natural illumination image. HSV _ RGB { } denotes that HSV image is converted into RGB image, and cat [ ] denotes stacking operation. Gamma is a random factor between 0 and 1, and 1 random factor is generated for each natural illumination image.
In the training process, an Adam optimizer is adopted, the weight attenuation rate is 0.0001, the initial learning rate is 0.00005, and the learning rate is attenuated by 10 times every 30 cycles. The training batch and training period are 1 and 90, respectively. The network weights are initialized with a gaussian distribution.
(V) Experimental results show that
The method disclosed by the invention is used for testing the image in the common data set in the field of image enhancement, the low-illumination image test result is shown in fig. 2, and the original image has the characteristics of dark brightness, low contrast and the like. After the network provided by the patent is enhanced, the image brightness and the contrast are improved, the details are clear, the experimental result shows that the network can effectively improve the brightness of an atmospheric dark image, and only 0.041 second is needed for processing a 600 × 600 image on a computer of an Intel-i5 CPU and NVIDIA GTX 2080Ti GPU.
Example two:
the embodiment aims at a low-illumination image enhancement system based on multi-expression fusion.
A multi-expression fusion-based low-illumination image enhancement system, comprising:
the image acquisition module is used for acquiring a low-illumination image to be enhanced;
the image enhancement module is used for inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion and outputting an enhanced image;
the convolutional neural network obtains the multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits the relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics.
Example three:
the embodiment aims at providing an electronic device.
An electronic device comprising a memory, a processor and a computer program stored in the memory for execution, wherein the processor implements a multi-expression fusion-based low-illumination image enhancement method when executing the program, and the method comprises:
acquiring a low-illumination image to be enhanced;
inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion, and outputting an enhanced image;
the convolutional neural network obtains the multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits the relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics.
Example four:
it is an object of the present embodiments to provide a non-transitory computer-readable storage medium,
a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a multi-expression fusion-based low-illumination image enhancement method, comprising:
acquiring a low-illumination image to be enhanced;
inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion, and outputting an enhanced image;
the convolutional neural network obtains the multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits the relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics.
The low-illumination image enhancement method and system based on multi-expression fusion can be realized, and have wide application prospects.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the embodiments of the present disclosure have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present disclosure, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive changes in the technical solutions of the present disclosure.

Claims (4)

1. A low-illumination image enhancement method based on multi-expression fusion is characterized by comprising the following steps:
acquiring a low-illumination image to be enhanced;
inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion, and outputting an enhanced image;
the convolutional neural network comprises a multi-scale convolution module, a weight graph acquisition module, an illumination calculation module and an enhanced image solving module;
the multi-scale convolution module adopts two layers of 4 parallel convolution layers to generate 16-scale features, and simultaneously adopts two convolution layers to compress the multi-scale features to 3 channels after the two layers of parallel convolution layers;
the weight map acquisition module comprises two convolution layers, outputs a weight map of 12 channels, and obtains fusion weight maps of 4 channels and 3 channels by separating the weight maps of 12 channels;
the illumination calculation module receives output results of the multi-scale convolution module and the weight map acquisition module and obtains an illumination calculation result by using a relation function between the low illumination image and the illumination;
the enhanced image solving module can express the image with low illumination intensity as the product of illumination intensity and a real image based on Retinex theory to obtain an enhanced image;
the convolutional neural network obtains multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits a relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics;
the relationship function between the low-illumination image and the illumination is specifically expressed as follows:
Figure FDA0003606962270000011
wherein z isc(x) Is the illuminance, w1,w2,w3,w4To fuse the weight maps, f is the multi-scale feature of the low-illumination image.
2. A low-illumination image enhancement system based on multi-expression fusion, comprising:
the image acquisition module is used for acquiring a low-illumination image to be enhanced;
the image enhancement module is used for inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion and outputting an enhanced image;
the convolutional neural network comprises a multi-scale convolution module, a weight graph acquisition module, an illumination calculation module and an enhanced image solving module;
the multi-scale convolution module adopts two layers of 4 parallel convolution layers to generate 16-scale features, and simultaneously adopts two convolution layers to compress the multi-scale features to 3 channels after the two layers of parallel convolution layers;
the weight map acquisition module comprises two convolution layers, outputs a weight map of 12 channels, and obtains fusion weight maps of 4 channels and 3 channels by separating the weight maps of 12 channels;
the illumination calculation module receives output results of the multi-scale convolution module and the weight map acquisition module and obtains an illumination calculation result by using a relation function between the low illumination image and the illumination;
the enhanced image solving module can express the image with low illumination intensity as the product of illumination intensity and a real image based on Retinex theory to obtain an enhanced image; the convolutional neural network obtains multi-scale characteristics of the low-illumination image through multi-scale convolution, and fits a relation function between the low-illumination image and illumination by utilizing a multi-expression fusion mode based on the multi-scale characteristics;
the relationship function between the low-illumination image and the illumination is specifically expressed as follows:
Figure FDA0003606962270000021
wherein z isc(x) Is the illuminance, w1,w2,w3,w4To fuse the weight maps, f is the multi-scale feature of the low-illumination image.
3. An electronic device comprising a memory, a processor, and a computer program stored and executed on the memory, wherein the processor implements the method of enhancing a low-illumination image based on multi-expression fusion according to claim 1 when executing the program.
4. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a method for low-illumination image enhancement based on multi-expression fusion according to claim 1.
CN202110044526.XA 2021-01-13 2021-01-13 Low-illumination image enhancement method and system based on multi-expression fusion Active CN112734673B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110044526.XA CN112734673B (en) 2021-01-13 2021-01-13 Low-illumination image enhancement method and system based on multi-expression fusion
LU500193A LU500193B1 (en) 2021-01-13 2021-05-24 Low-illumination image enhancement method and system based on multi-expression fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110044526.XA CN112734673B (en) 2021-01-13 2021-01-13 Low-illumination image enhancement method and system based on multi-expression fusion

Publications (2)

Publication Number Publication Date
CN112734673A CN112734673A (en) 2021-04-30
CN112734673B true CN112734673B (en) 2022-06-21

Family

ID=75591516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110044526.XA Active CN112734673B (en) 2021-01-13 2021-01-13 Low-illumination image enhancement method and system based on multi-expression fusion

Country Status (2)

Country Link
CN (1) CN112734673B (en)
LU (1) LU500193B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012260B (en) 2023-02-23 2023-07-04 杭州电子科技大学 Low-light image enhancement method based on depth Retinex

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839245A (en) * 2014-02-28 2014-06-04 北京工业大学 Retinex night color image enhancement method based on statistical regularities
CN105654438A (en) * 2015-12-27 2016-06-08 西南技术物理研究所 Gray scale image fitting enhancement method based on local histogram equalization
CN108492271A (en) * 2018-03-26 2018-09-04 中国电子科技集团公司第三十八研究所 A kind of automated graphics enhancing system and method for fusion multi-scale information
CN108564549A (en) * 2018-04-20 2018-09-21 福建帝视信息科技有限公司 A kind of image defogging method based on multiple dimensioned dense connection network
CN110175964A (en) * 2019-05-30 2019-08-27 大连海事大学 A kind of Retinex image enchancing method based on laplacian pyramid
CN110414571A (en) * 2019-07-05 2019-11-05 浙江网新数字技术有限公司 A kind of website based on Fusion Features reports an error screenshot classification method
CN110852964A (en) * 2019-10-30 2020-02-28 天津大学 Image bit enhancement method based on deep learning
CN111260543A (en) * 2020-01-19 2020-06-09 浙江大学 Underwater image splicing method based on multi-scale image fusion and SIFT features

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077512B (en) * 2012-10-18 2015-09-09 北京工业大学 Based on the feature extracting and matching method of the digital picture that major component is analysed
CN110717497B (en) * 2019-09-06 2023-11-07 中国平安财产保险股份有限公司 Image similarity matching method, device and computer readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839245A (en) * 2014-02-28 2014-06-04 北京工业大学 Retinex night color image enhancement method based on statistical regularities
CN105654438A (en) * 2015-12-27 2016-06-08 西南技术物理研究所 Gray scale image fitting enhancement method based on local histogram equalization
CN108492271A (en) * 2018-03-26 2018-09-04 中国电子科技集团公司第三十八研究所 A kind of automated graphics enhancing system and method for fusion multi-scale information
CN108564549A (en) * 2018-04-20 2018-09-21 福建帝视信息科技有限公司 A kind of image defogging method based on multiple dimensioned dense connection network
CN110175964A (en) * 2019-05-30 2019-08-27 大连海事大学 A kind of Retinex image enchancing method based on laplacian pyramid
CN110414571A (en) * 2019-07-05 2019-11-05 浙江网新数字技术有限公司 A kind of website based on Fusion Features reports an error screenshot classification method
CN110852964A (en) * 2019-10-30 2020-02-28 天津大学 Image bit enhancement method based on deep learning
CN111260543A (en) * 2020-01-19 2020-06-09 浙江大学 Underwater image splicing method based on multi-scale image fusion and SIFT features

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A weighted variational model for simultaneous reflectance and illumination estimation";Xueyang Fu 等;《IEEE》;20161231;第2782-2790页 *
"Dual-Purpose Method for Underwater and Low-Light Image Enhancement via Image Layer Separation";CHENGGANG DAI 等;《IEEE》;20191223;第178685-178698页 *
"基于亮通道色彩补偿与融合的水下图像增强";代成刚 等;《光学学报》;20181130;第38卷(第11期);第1-10页 *
"基于变分结构引导滤波的低照度图像增强算法";董雪 等;《山东大学学报》;20200930;第55卷(第9期);第72-80页 *

Also Published As

Publication number Publication date
LU500193B1 (en) 2021-11-24
CN112734673A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
CN112164005B (en) Image color correction method, device, equipment and storage medium
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN112489164B (en) Image coloring method based on improved depth separable convolutional neural network
CN114004754B (en) Scene depth completion system and method based on deep learning
CN112991371B (en) Automatic image coloring method and system based on coloring overflow constraint
CN103778900A (en) Image processing method and system
CN112819096A (en) Method for constructing fossil image classification model based on composite convolutional neural network
CN110717864B (en) Image enhancement method, device, terminal equipment and computer readable medium
CN113066018A (en) Image enhancement method and related device
CN112102186A (en) Real-time enhancement method for underwater video image
CN112734673B (en) Low-illumination image enhancement method and system based on multi-expression fusion
CN116523888B (en) Pavement crack detection method, device, equipment and medium
Tan et al. A simple gray-edge automatic white balance method with FPGA implementation
CN111369435A (en) Color image depth up-sampling method and system based on self-adaptive stable model
CN115205157B (en) Image processing method and system, electronic device and storage medium
CN114022371B (en) Defogging device and defogging method based on space and channel attention residual error network
CN113591838B (en) Target detection method, device, electronic equipment and storage medium
CN112734655B (en) Low-light image enhancement method for enhancing CRM (customer relationship management) based on convolutional neural network image
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion
CN115190226B (en) Parameter adjustment method, neural network model training method and related devices
CN111935475B (en) Multi-view-based scene reconstruction method and system, server and storage medium
CN113393510B (en) Image processing method, intelligent terminal and storage medium
WO2023028866A1 (en) Image processing method and apparatus, and vehicle
Li et al. Color layers-Based progressive network for Single image dehazing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant