LU500193B1 - Low-illumination image enhancement method and system based on multi-expression fusion


Info

Publication number
LU500193B1
Authority
LU
Luxembourg
Prior art keywords
low
illumination
image
illumination image
expression fusion
Prior art date
Application number
LU500193A
Other languages
French (fr)
Inventor
Mingxing Lin
Original Assignee
Univ Shandong
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Shandong filed Critical Univ Shandong
Application granted granted Critical
Publication of LU500193B1 publication Critical patent/LU500193B1/en

Classifications

    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a low-illumination image enhancement method and system based on multi-expression fusion. The solution comprises: acquiring a low-illumination image to be enhanced; and inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image; wherein the convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion. The solution can fit the mathematical relationship between the brightness and the original image well with only a few parameters, thereby achieving excellent enhancement effects.

Description

LOW-ILLUMINATION IMAGE ENHANCEMENT METHOD AND SYSTEM BASED ON MULTI-EXPRESSION FUSION

Field of the Invention

The present disclosure belongs to the field of image enhancement technology, and particularly relates to a low-illumination image enhancement method and system based on multi-expression fusion.
Background of the Invention

The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
Due to the superior performance of convolutional neural networks in image processing tasks, a large number of convolutional neural networks have been developed to enhance low-illumination images. At present, most low-illumination image enhancement convolutional neural networks are constructed based on the Retinex theory. The idea of this type of networks is to learn a brightness component from an input image and remove the brightness component from the original image to achieve low-brightness image enhancement. However, the inventor found, due to the complicated mathematical relationship between the image brightness and the original image, a network including more convolutional layers is required to fit the relationship between the brightness and the original image, but excessive convolutional layers increase the cost of network training, and also affect the efficiency of image enhancement processing. In addition, if the number of convolutional layers is too small, accurate enhancement effects cannot be achieved. Therefore, existing methods require a lot of debug in determining a reasonable number of convolutional layers to fit the relationship between the brightness and the original image.
Summary of the Invention

In order to solve the above problems, the present disclosure provides a low-illumination image enhancement method and system based on multi-expression fusion, which can fit the mathematical relationship between the brightness and the original image well with only a few parameters, thereby achieving excellent enhancement effects.
According to the first aspect of the embodiments of the present disclosure, provided is a low-illumination image enhancement method based on multi-expression fusion, including: acquiring a low-illumination image to be enhanced; and inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image;
wherein the convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion.
Further, the relationship function between the low-illumination image and the illumination is specifically expressed as follows:

Z^c(x) = w_1·ln(f) + w_2·e^f + f^(w_3) + w_4·f

wherein Z^c(x) is the illumination, w_1, w_2, w_3, and w_4 are fusion weight maps, and f represents the multi-scale features of the low-illumination image.
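By way of illustration only, the fused expression above can be evaluated element-wise as in the following minimal NumPy sketch; the function name, array shapes, and the clamping epsilon are our assumptions and not part of the disclosure:

```python
import numpy as np

def fuse_expressions(f, w1, w2, w3, w4, eps=1e-6):
    """Element-wise evaluation of Z = w1*ln(f) + w2*e^f + f^w3 + w4*f.
    f is clamped to eps so the logarithm and power terms stay defined."""
    f = np.clip(f, eps, None)
    return w1 * np.log(f) + w2 * np.exp(f) + f ** w3 + w4 * f

# Toy example: a 3-channel 4x4 "feature map" and uniform weight maps.
f = np.random.rand(3, 4, 4).astype(np.float32)
w1, w2, w3, w4 = (np.full_like(f, 0.25) for _ in range(4))
print(fuse_expressions(f, w1, w2, w3, w4).shape)  # (3, 4, 4)
```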
Further, the convolutional neural network includes a multi-scale convolutional module, a weight map acquisition module, an illumination calculation module, and an enhanced image solving module.
Further, the multi-scale convolutional module generates 4 different scales of features of the low-illumination image through 4 parallel convolutional layers, and the multi-scale features are compressed to 3 channels using two convolutional layers following the parallel convolutional layers.
Further, in order to obtain more scales of features, the multi-scale convolutional module can also generate 16 scales of features using two layers of 4 parallel convolutional layers, and the multi-scale features are compressed to 3 channels using two convolutional layers after two layers of parallel convolutional layers.
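As an illustrative sketch only (the disclosure does not specify kernel sizes, branch widths, or activations, so all of those are assumptions), the basic 4-branch variant of the multi-scale convolutional module might be written in PyTorch as:

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """Sketch of the multi-scale convolutional module M[.]: four parallel
    convolutions produce four scales of features, which two further
    convolutions compress to 3 channels. Kernel sizes (1/3/5/7), branch
    width, and the ReLU are assumptions; the disclosure does not fix them."""

    def __init__(self, in_ch=3, branch_ch=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in (1, 3, 5, 7)
        ])
        self.compress = nn.Sequential(
            nn.Conv2d(4 * branch_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # 4 scales, stacked
        return self.compress(feats)                              # down to 3 channels
```

A forward pass preserves the spatial size: MultiScaleConv()(torch.rand(1, 3, 64, 64)) returns a (1, 3, 64, 64) tensor.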
Further, the weight map acquisition module includes two convolutional layers, which output 12 channels of weight maps, and four 3-channel fusion weight maps are obtained by separating the 12 channels of weight maps.
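A corresponding sketch of the weight map acquisition module follows; only the 12-channel output and the four-way split come from the text, while the hidden width and kernel sizes are assumptions:

```python
import torch
import torch.nn as nn

class WeightMapModule(nn.Module):
    """Sketch of the weight map acquisition module: two convolutional layers
    emit 12 channels of weight maps, which are split into four 3-channel
    fusion weight maps w1..w4."""

    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 12, 3, padding=1),
        )

    def forward(self, x):
        w = self.net(x)                  # (N, 12, H, W)
        return torch.split(w, 3, dim=1)  # four (N, 3, H, W) weight maps
```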
Further, the illumination calculation module receives the output results of the multi-scale convolutional module and the weight map acquisition module, and obtains an illumination calculation result using the relationship function between the low-illumination image and the illumination.
Further, the enhanced image solving module solves the enhanced image based on the Retinex theory that the low-illumination image can be expressed as a product of an illumination and a real image.
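Since the Retinex model Q = Z · J implies J = Q / Z, the solving step reduces to an element-wise division, sketched below with an epsilon guard of our own to avoid division by near-zero illumination:

```python
import torch

def solve_enhanced_image(q, z, eps=1e-4):
    """Retinex solving step: since Q(x) = Z(x) * J(x) element-wise, the
    enhanced (real) image is J = Q / Z. The eps clamp is our addition to
    guard against division by near-zero illumination values."""
    return q / torch.clamp(z, min=eps)
```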
According to a second aspect of the embodiments of the present disclosure, provided is a low-illumination image enhancement system based on multi-expression fusion, including: an image acquisition module, configured to acquire a low-illumination image to be enhanced; and an image enhancement module, configured to input the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image; wherein the convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion.
According to a third aspect of the embodiments of the present disclosure, provided is an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the low-illumination image enhancement method based on multi-expression fusion.
According to a fourth aspect of the embodiments of the present disclosure, provided is a non-transitory computer-readable storage medium, storing a computer program thereon, wherein the program is executed by a processor to implement the low-illumination image enhancement method based on multi-expression fusion.
Compared with the prior art, the beneficial effects of the present disclosure are: considering that, according to prior knowledge, an illumination can be expressed as a certain combination of multi-scale features of an original image, the solution described in the present disclosure exploits this prior knowledge and innovatively applies multi-expression fusion to fit the mathematical relationship between the original image and the illumination; the mathematical relationship between the brightness and the original image can thus be well fitted with only a few parameters, thereby achieving excellent enhancement effects.
Additional advantages of the present disclosure will be set forth in part in the following description; some will become apparent from the description, or may be learned through practice of the present disclosure.
Brief Description of the Drawings

The accompanying drawings constituting a part of the present disclosure are used for providing a further understanding of the present disclosure, and the schematic embodiments of the present disclosure and the descriptions thereof are used for interpreting the present disclosure, rather than constituting improper limitations to the present disclosure.
Fig. 1 is a schematic structural diagram of a low-illumination image enhancement convolutional neural network based on multi-expression fusion according to Embodiment 1 of the present disclosure;

Fig. 2 is a schematic diagram of low-illumination image enhancement experiment results according to Embodiment 1 of the present disclosure.
Detailed Description of Embodiments

The present disclosure will be further described below in conjunction with the drawings and embodiments.
It should be noted that the following detailed descriptions are exemplary and are intended to provide further descriptions of the present disclosure. All technical and scientific terms used herein have the same meaning as commonly understood by those of ordinary skill in the technical field to which the present application belongs, unless otherwise indicated.
It should be noted that the terms used here are merely used for describing specific embodiments, but are not intended to limit the exemplary embodiments of the present invention. As used herein, unless otherwise clearly stated in the context, the singular form is also intended to include the plural form. In addition, it should also be understood that when the terms "include" and/or "comprise" are used in the Description, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof. The embodiments in the present disclosure and the features in the embodiments can be combined with each other without conflict.

Embodiment 1:

The objective of this embodiment is to provide a low-illumination image enhancement method based on multi-expression fusion. The method includes:

acquiring a low-illumination image to be enhanced; and

inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image;

wherein the convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion.

Specifically, for ease of understanding, the following describes the solution of the present disclosure in detail with reference to Fig. 1.

(1) Theoretical basis

According to the Retinex theory, an image can be expressed as a product of an illumination and a real scenario, as shown in formula (1):

Q^c(x) = Z^c(x) · J^c(x)    (1)

wherein x is a pixel coordinate, c represents the red, green, and blue channels of an image, J^c(x) is the real scenario, Q^c(x) is the low-illumination image, Z^c(x) is the illumination, and · represents element-wise multiplication of matrices.
(2) Fitting of a relationship between the low-illumination image and the illumination

At present, most low-illumination enhancement networks directly learn the illumination from the original image, but the mathematical relationship between the illumination and the original image is complicated, so a large number of convolutional layers are required to fit the relationship between the illumination and the original image well. According to prior knowledge, the illumination can be expressed as a certain combination of multi-scale features of the original image. The present disclosure exploits this prior knowledge and innovatively applies multi-expression fusion to fit the mathematical relationship between the original image and the illumination, as shown in formula (2):

Z^c(x) = w_1·ln(f) + w_2·e^f + f^(w_3) + w_4·f    (2)

In the formula, w_1, w_2, w_3, and w_4 are fusion weight maps, and f represents the multi-scale features, which are obtained by multi-scale convolution on the original image. Since the mathematical mapping form from the multi-scale features to the illumination is unknown, the present disclosure fuses several common mathematical relations, including: logarithm, exponent, power function, and addition. On this basis, formula (2) is further transformed into:

Z^c(x) = w_1·ln{M[Q^c(x)]} + w_2·e^(M[Q^c(x)]) + M[Q^c(x)]^(w_3) + w_4·M[Q^c(x)]    (3)

In the formula, M[·] is the multi-scale convolutional module. After the illumination Z^c(x) is obtained, an image with enhanced brightness can be obtained according to formula (1), as shown in formula (4):

J^c(x) = Q^c(x) / ( w_1·ln{M[Q^c(x)]} + w_2·e^(M[Q^c(x)]) + M[Q^c(x)]^(w_3) + w_4·M[Q^c(x)] )    (4)

According to formula (4), a convolutional neural network is constructed to enhance the low-illumination image.
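For illustration, formulas (3) and (4) can be assembled into a single forward pass as in the following PyTorch sketch; msconv and wmaps stand in for the modules described in the next subsection (for instance the MultiScaleConv and WeightMapModule sketches above), and the epsilon clamps are our additions for numerical safety:

```python
import torch
import torch.nn as nn

class MultiExpressionFusionNet(nn.Module):
    """End-to-end sketch of formulas (3) and (4). `msconv` is any module
    computing the multi-scale features M[Q(x)] (3 channels) and `wmaps` any
    module returning the four fusion weight maps w1..w4; the eps clamps are
    our additions, not part of the disclosure."""

    def __init__(self, msconv: nn.Module, wmaps: nn.Module, eps: float = 1e-4):
        super().__init__()
        self.msconv, self.wmaps, self.eps = msconv, wmaps, eps

    def forward(self, q):
        f = torch.clamp(self.msconv(q), min=self.eps)   # M[Q(x)], kept positive
        w1, w2, w3, w4 = self.wmaps(q)
        # Formula (3): fuse logarithm, exponent, power, and linear terms.
        z = w1 * torch.log(f) + w2 * torch.exp(f) + f ** w3 + w4 * f
        # Formula (4): enhanced image J = Q / Z (Retinex).
        return q / torch.clamp(z, min=self.eps)
```

With the earlier sketches, MultiExpressionFusionNet(MultiScaleConv(), WeightMapModule()) is a runnable toy instance.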
(3) Construction of a convolutional neural network

Fig. 1 shows the convolutional neural network structure of the present disclosure. The convolutional neural network includes a multi-scale convolutional module, a weight map acquisition module, an illumination calculation module, and an enhanced image solving module.

The multi-scale convolutional module generates 4 different scales of features of the low-illumination image through 4 parallel convolutional layers, and the multi-scale features are compressed to 3 channels using two convolutional layers following the parallel convolutional layers. In order to obtain more scales of features, the multi-scale convolutional module can also generate 16 scales of features using two layers of 4 parallel convolutional layers, and the multi-scale features are then compressed to 3 channels using two convolutional layers after the two layers of parallel convolutional layers.

The weight map acquisition module includes two convolutional layers, which output 12 channels of weight maps, and four 3-channel fusion weight maps are obtained by separating the 12 channels of weight maps.

The illumination calculation module receives the output results of the multi-scale convolutional module and the weight map acquisition module, and obtains an illumination calculation result using the relationship function between the low-illumination image and the illumination.

The enhanced image solving module solves the enhanced image based on the Retinex theory that the low-illumination image can be expressed as a product of an illumination and a real image.

(4) Network training

A loss function for training the network is shown in formula (5). The loss function includes two terms: a mean square error term and a feature loss term.
The mean square error term minimizes the difference between pixels of the output image and a reference image.
The feature loss term minimizes the difference between advanced features of the output image and the reference image.
Loss = ||S^c(x) − G^c(x)||_2 + λ · Σ_i ||T_i[S^c(x)] − T_i[G^c(x)]||_1    (5)

wherein S^c(x) is the output image and G^c(x) is the reference image.
T_i[·] is a feature extractor.
The present disclosure uses a VGG16 network as the feature extractor.
In order to improve the training speed, only the features of layers 2, 14, and 30 are used. ||·||_2 is the L2 norm, ||·||_1 is the L1 norm, and λ is an adjustment factor for balancing the weights of the two loss terms, set to 0.01. 500 natural-illumination images are synthesized into 500 low-illumination images to train the network.
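One possible PyTorch realization of this loss is sketched below; the use of torchvision's pretrained VGG16, the omission of VGG input normalization, and the exact reductions of the norms are our assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class FusionLoss(nn.Module):
    """Sketch of formula (5): an L2 pixel term plus lambda times the L1
    distances between VGG16 features taken after layers 2, 14, and 30.
    VGG input normalization is omitted here for brevity."""

    def __init__(self, lam=0.01, layers=(2, 14, 30)):
        super().__init__()
        self.lam, self.layers = lam, set(layers)
        self.vgg = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)

    def _feats(self, x):
        out = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                out.append(x)
        return out

    def forward(self, s, g):
        # ||S - G||_2 pixel term ...
        loss = torch.linalg.vector_norm(s - g, ord=2)
        # ... plus lambda * sum_i ||T_i[S] - T_i[G]||_1 feature terms.
        for fs, fg in zip(self._feats(s), self._feats(g)):
            loss = loss + self.lam * (fs - fg).abs().sum()
        return loss
```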
The steps for synthesizing the low-illumination images are shown in formulas (6) and (7):

D^c(x) = hsv_rgb{cat[H(x), S(x), V'(x)]}    (6)

V'(x) = V(x)^γ    (7)

In the formulas, H(x), S(x), and V(x) represent the H, S, and V channels of a natural illumination image, hsv_rgb{·} indicates that an HSV image is transformed into an RGB image, and cat[·] indicates a stacking operation. γ is a random factor between 0 and 1, and each natural illumination image generates one random factor.
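A minimal sketch of this synthesis step is given below, assuming float32 RGB input in [0, 1] and OpenCV colour conversion; note that for V in [0, 1] an exponent drawn from (0, 1) raises V rather than lowering it, so a variant that darkens would use the reciprocal exponent; the sketch follows the text as stated:

```python
import cv2
import numpy as np

def synthesize_low_light(rgb, rng=np.random.default_rng()):
    """Sketch of formulas (6) and (7) for one float32 RGB image in [0, 1]:
    raise the HSV V channel to a random power gamma and convert back.
    Per the text, gamma is drawn from (0, 1) once per image; a variant that
    darkens more strongly would use 1.0 / gamma instead (our note)."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    h, s, v = cv2.split(hsv)
    gamma = rng.uniform(0.0, 1.0)        # one random factor per image
    v_prime = v ** gamma                 # formula (7): V'(x) = V(x)^gamma
    hsv = cv2.merge([h, s, v_prime])     # formula (6): cat[H(x), S(x), V'(x)]
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)  # hsv_rgb{.}
```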
An Adam optimizer is used in the training process, with a weight decay rate of 0.0001 and an initial learning rate of 0.00005; the learning rate decays by a factor of 10 every 30 cycles.
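In PyTorch terms, this training configuration corresponds to the following sketch; model is a placeholder module and the loop body over the 500 synthesized pairs is omitted:

```python
import torch
import torch.nn as nn

# Training configuration sketch matching the stated hyper-parameters;
# `model` is a placeholder for the enhancement network.
model = nn.Conv2d(3, 3, 3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, weight_decay=1e-4)
# Learning rate decays by a factor of 10 every 30 cycles.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):  # 90 training cycles, batch size 1 per the text
    # ... one pass over the 500 synthesized image pairs would go here ...
    scheduler.step()
```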
The training batch size and the number of training cycles are 1 and 90, respectively. The network weights are initialized from a Gaussian distribution.

(5) Experimental proof

The present disclosure uses images from a data set commonly used in the field of image enhancement to test the proposed method. The low-illumination image test results are shown in Fig. 2. The original images have the characteristics of low brightness, low contrast, etc. After enhancement by the proposed network, the brightness and contrast of the images are improved, and the details are clear. The experimental results show that the network can effectively improve the brightness of dark images, and it takes only 0.041 seconds to process a 600×600 image on a computer with an Intel i5 CPU and an NVIDIA RTX 2080 Ti GPU.
Embodiment 2: The objective of this embodiment is to provide a low-illumination image enhancement system based on multi-expression fusion.
A low-illumination image enhancement system based on multi-expression fusion includes: an image acquisition module, configured to acquire a low-illumination image to be enhanced, and an image enhancement module, configured to input the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image; The convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion.
Embodiment 3: The objective of this embodiment is to provide an electronic device.
An electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the low-illumination image enhancement method based on multi-expression fusion, including:

acquiring a low-illumination image to be enhanced; and

inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image;

wherein the convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion.
Embodiment 4:
The objective of this embodiment is to provide a non-transitory computer-readable storage medium.
A non-transitory computer-readable storage medium stores a computer program thereon, wherein the program is executed by a processor to implement the low-illumination image enhancement method based on multi-expression fusion, including:

acquiring a low-illumination image to be enhanced; and

inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image;

wherein the convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion.
The low-illumination image enhancement method and system based on multi-expression fusion according to the above-mentioned embodiments can be implemented, and have broad application prospects.
Described above are merely preferred embodiments of the present disclosure, and the present disclosure is not limited thereto. Various modifications and variations may be made to the present disclosure by those skilled in the art. Any modification, equivalent substitution, improvement or the like made within the spirit and principle of the present disclosure shall fall into the protection scope of the present disclosure.
Although the specific embodiments of the present disclosure are described above in combination with the accompanying drawings, the protection scope of the present disclosure is not limited thereto. It should be understood that various modifications or variations could be made by those skilled in the art based on the technical solution of the present disclosure without any creative effort, and these modifications or variations shall fall into the protection scope of the present disclosure.

Claims (10)

Claims
1. A low-illumination image enhancement method based on multi-expression fusion, comprising: acquiring a low-illumination image to be enhanced; and inputting the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image; wherein the convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion.
2. The low-illumination image enhancement method based on multi-expression fusion according to claim 1, wherein the relationship function between the low-illumination image and the illumination is specifically expressed as follows:

Z^c(x) = w_1·ln(f) + w_2·e^f + f^(w_3) + w_4·f

wherein Z^c(x) is the illumination, w_1, w_2, w_3, and w_4 are fusion weight maps, and f represents the multi-scale features of the low-illumination image.
3. The low-illumination image enhancement method based on multi-expression fusion according to claim 1, wherein the convolutional neural network comprises a multi-scale convolutional module, a weight map acquisition module, an illumination calculation module, and an enhanced image solving module.
4. The low-illumination image enhancement method based on multi-expression fusion according to claim 1, wherein the multi-scale convolutional module generates 4 different scales of features of the low-illumination image through 4 parallel convolutional layers, and the multi-scale features are compressed to 3 channels using two convolutional layers following the parallel convolutional layers.
5. The low-illumination image enhancement method based on multi-expression fusion according to claim 1, wherein the weight map acquisition module comprises two convolutional layers, which output 12 channels of weight maps, and four 3-channel fusion weight maps are obtained by separating the 12 channels of weight maps.
6. The low-illumination image enhancement method based on multi-expression fusion according to claim 1, wherein the illumination calculation module receives the output results of the multi-scale convolutional module and the weight map acquisition module, and obtains an illumination calculation result using the relationship function between the low-illumination image and the illumination.
7. The low-illumination image enhancement method based on multi-expression fusion according to claim 1, wherein the enhanced image solving module solves the enhanced image based on the Retinex theory that the low-illumination image can be expressed as a product of an illumination and a real image.
8. A low-illumination image enhancement system based on multi-expression fusion, comprising: an image acquisition module, configured to acquire a low-illumination image to be enhanced, and an image enhancement module, configured to input the low-illumination image into a pre-trained convolutional neural network based on multi-expression fusion to output an enhanced image; wherein the convolutional neural network obtains multi-scale features of the low-illumination image through multi-scale convolution, and fits a relationship function between the low-illumination image and an illumination based on the multi-scale features by means of the multi-expression fusion.
9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the low-illumination image enhancement method based on multi-expression fusion according to any one of claims 1-7.
10. A non-transitory computer-readable storage medium, storing a computer program thereon, wherein the program is executed by a processor to implement the low-illumination image enhancement method based on multi-expression fusion according to any one of claims 1-7.
LU500193A 2021-01-13 2021-05-24 Low-illumination image enhancement method and system based on multi-expression fusion LU500193B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110044526.XA CN112734673B (en) 2021-01-13 2021-01-13 Low-illumination image enhancement method and system based on multi-expression fusion

Publications (1)

Publication Number Publication Date
LU500193B1 true LU500193B1 (en) 2021-11-24

Family

ID=75591516

Family Applications (1)

Application Number Title Priority Date Filing Date
LU500193A LU500193B1 (en) 2021-01-13 2021-05-24 Low-illumination image enhancement method and system based on multi-expression fusion

Country Status (2)

Country Link
CN (1) CN112734673B (en)
LU (1) LU500193B1 (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077512B (en) * 2012-10-18 2015-09-09 北京工业大学 Based on the feature extracting and matching method of the digital picture that major component is analysed
CN103839245B (en) * 2014-02-28 2016-06-15 北京工业大学 The Retinex colour-image reinforcing method at night of Corpus--based Method rule
CN105654438A (en) * 2015-12-27 2016-06-08 西南技术物理研究所 Gray scale image fitting enhancement method based on local histogram equalization
CN108492271B (en) * 2018-03-26 2021-08-24 中国电子科技集团公司第三十八研究所 Automatic image enhancement system and method fusing multi-scale information
CN108564549B (en) * 2018-04-20 2022-04-05 福建帝视信息科技有限公司 Image defogging method based on multi-scale dense connection network
CN110175964B (en) * 2019-05-30 2022-09-30 大连海事大学 Retinex image enhancement method based on Laplacian pyramid
CN110414571A (en) * 2019-07-05 2019-11-05 浙江网新数字技术有限公司 A kind of website based on Fusion Features reports an error screenshot classification method
CN110717497B (en) * 2019-09-06 2023-11-07 中国平安财产保险股份有限公司 Image similarity matching method, device and computer readable storage medium
CN110852964A (en) * 2019-10-30 2020-02-28 天津大学 Image bit enhancement method based on deep learning
CN111260543B (en) * 2020-01-19 2022-01-14 浙江大学 Underwater image splicing method based on multi-scale image fusion and SIFT features

Also Published As

Publication number Publication date
CN112734673A (en) 2021-04-30
CN112734673B (en) 2022-06-21


Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20211124