CN112862713A - Attention mechanism-based low-light image enhancement method and system - Google Patents

Attention mechanism-based low-light image enhancement method and system

Info

Publication number: CN112862713A (application CN202110142918.XA; granted as CN112862713B)
Authority
CN
China
Prior art keywords
convolution, attention mechanism, layer, low-light image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110142918.XA
Other languages
Chinese (zh)
Other versions
CN112862713B (en)
Inventor
吕晨
盛星
孟琛
庄云亮
康春萌
吕蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Bio Newvision Medical Equipment Ltd
Original Assignee
Shandong Normal University
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN202110142918.XA
Publication of CN112862713A
Application granted
Publication of CN112862713B
Legal status: Active

Classifications

    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06N 3/02 Neural networks
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]


Abstract

The invention discloses a low-light image enhancement method and system based on an attention mechanism, comprising the following steps: decomposing a low-light image to be enhanced with a retina decomposition network to obtain an illumination component and a reflection component, where the retina decomposition network comprises four types of sequentially connected convolution layers with different convolution kernel sizes, and a multi-scale receptive field (MSRF) attention operation follows each type of convolution layer; and training a pre-constructed fusion enhancement network with a focal loss function, then applying the trained fusion network to the illumination and reflection components to obtain an enhanced picture. The invention provides a retina decomposition network (RDNet) for decomposition and a fusion enhancement network (FENet) for fusion, introduces an attention mechanism based on the multi-scale receptive field (MSRF), and designs an ω-focal loss function that integrates a scale factor into the focal loss, thereby alleviating the sample-imbalance problem and improving the effect of low-light image enhancement.

Description

Attention mechanism-based low-light image enhancement method and system
Technical Field
The invention relates to the technical field of target detection against complex backgrounds, and in particular to a low-light image enhancement method and system based on an attention mechanism.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Photographs taken under low-light conditions not only create an unpleasant experience for the user but also degrade the performance of other computer vision tasks, such as object detection and person re-identification, because most solutions to these tasks are designed for well-exposed images. A method is therefore needed that can effectively improve the quality of low-light images.
Conventional single-image low-light enhancement methods include histogram-based, dehazing-based, and Retinex-based methods. Histogram-based methods redistribute the histogram toward a uniform distribution and adjust the gamma-curve exponent. Dehazing-based methods exploit the similarity between enhancing an inverted low-light image and dehazing. Retinex-based methods typically decompose a low-light image into illumination and reflectance components, from which better enhancement results can be reconstructed.
However, most Retinex-based methods assume that the reflectance component remains unchanged during enhancement, ignoring color distortion and missing details; the enhancement results of simple learning-based low-light enhancement methods without (W/O) decomposition are limited; SID can only improve raw images and cannot serve as post-processing for ordinary sRGB images; ICE decomposes low-light images into smooth and texture components without accounting for noise; and although some methods also learn a Retinex decomposition, they mainly process the decomposed illumination image and only use BM3D to denoise the decomposed reflectance, so a good enhancement result cannot be obtained.
Disclosure of Invention
In order to solve these problems, the invention provides a low-light image enhancement method and system based on an attention mechanism: it proposes a retina decomposition network (RDNet) for decomposition and a fusion enhancement network (FENet) for fusion, introduces an attention mechanism based on the multi-scale receptive field (MSRF), and designs an ω-focal loss function that integrates a scale factor into the focal loss, so as to alleviate the sample-imbalance problem and improve the effect of low-light image enhancement.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a low-light image enhancement method based on an attention mechanism, including:
decomposing a low-light image to be enhanced with a retina decomposition network to obtain an illumination component and a reflection component, wherein the retina decomposition network comprises four types of sequentially connected convolution layers with different convolution kernel sizes, and a multi-scale receptive field (MSRF) attention operation follows each type of convolution layer;
and training a pre-constructed fusion enhancement network with a focal loss function, then applying the trained fusion network to the illumination and reflection components to obtain an enhanced picture.
In a second aspect, the present invention provides an attention-based low-light image enhancement system, comprising:
a decomposition module configured to decompose a low-light image to be enhanced with a retina decomposition network to obtain an illumination component and a reflection component, wherein the retina decomposition network comprises sequentially connected convolution layers with different convolution kernel sizes, and an MSRF attention operation follows each convolution layer;
and an enhancement module configured to train a pre-constructed fusion enhancement network with a focal loss function and to apply the trained fusion enhancement network to the illumination and reflection components to obtain an enhanced image.
In a third aspect, the present invention provides an electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the method of the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a retina decomposition network RDNet for decomposition and a fusion enhancement network FENet for fusion, which introduce an attention mechanism based on MSRF, generate an image feature of each position weighted by an attention mechanism based on MSRF, and generate a semantically strong feature map for a positive object by highlighting information of a front target in the feature map; an end-to-end detection frame is provided, the enhancement effect of the low-light photo is improved, and the color and the texture of the photo are better recovered.
In order to solve the problem of unbalanced sample distribution, the invention designs a plurality of loss functions, and provides an omega focus loss function integrating a proportional factor into focus loss, wherein the proportional factor can automatically reduce the classes with a large number of objects in the training process, and allocate more attention to the classes with a small number of objects, thereby obviously improving the detection accuracy of the classes with fewer objects.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is an exploded structural view of a retina decomposition network RDNet according to embodiment 1 of the present invention;
FIG. 2 is a block diagram of an attention mechanism based on MSRF according to embodiment 1 of the present invention;
fig. 3 is a structural diagram of a fusion enhanced network FENet according to embodiment 1 of the present invention.
Detailed Description
the invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein describes particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and "comprising", and any variations thereof, cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus comprising a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example 1
The embodiment provides a low-light image enhancement method based on an attention mechanism, which comprises the following steps:
S1: decomposing a low-light image to be enhanced with a retina decomposition network to obtain an illumination component and a reflection component, wherein the retina decomposition network comprises four types of sequentially connected convolution layers with different convolution kernel sizes, and a multi-scale receptive field (MSRF) attention operation follows each type of convolution layer;
S2: training a pre-constructed fusion enhancement network with a focal loss function, then applying the trained fusion network to the illumination and reflection components to obtain an enhanced picture.
In step S1, the embodiment decomposes the low-light image by using the retina decomposition network RDNet and the attention mechanism based on the multi-scale receptive field MSRF to obtain the illumination component and the reflection component, as shown in fig. 1, specifically includes:
s1-1: stacking the maximum value of each pixel point of an RGB channel in a low-light image to be enhanced to obtain a one-dimensional tensor containing a 4-channel matrix, and connecting the one-dimensional tensor with the original low-light image to obtain an estimated illumination map of a four-dimensional tensor;
s1-2: the convolution layers with the four types of different convolution kernel sizes comprise 32 convolution kernels, a first convolution layer with the convolution kernel size of 3 x 3, a second convolution layer with the convolution kernel size of 3 x 3, a third convolution layer with the convolution kernel size of 3 x 3 and 128 convolution kernels, and a fourth convolution layer with the convolution kernel size of 1 x 1 and 1 convolution kernel; and each convolutional layer comprises a PReLU activation function;
s1-3: and after the estimated illumination map is subjected to convolution operation of four convolutional layers in sequence, the obtained front three channels are used as reflection components and the last channel is used as an illumination component through a sigmoid function, and image decomposition is completed.
Preferably, in the step S1-2:
s1-2-1: performing convolution operation twice on a first convolution layer of which the estimated illuminance map adopts 32 convolution kernels, the convolution kernel size is 3 x 3 and the activation function is PReLU, and performing MSRF attention mechanism operation once on the convolution result of the first convolution layer;
s1-2-2: performing convolution operation twice on a second convolution layer with 64 convolution kernels, the size of the convolution kernel being 3 x 3 and the activation function being PReLU on the convolution result of the first convolution layer and the operation result of the first MSRF attention mechanism, and performing MSRF attention mechanism operation once on the convolution result of the second convolution layer;
s1-2-3: performing four times of convolution operations on a third convolution layer with 128 convolution kernels, the size of the convolution kernel being 3 x 3 and the activation function being PReLU on the convolution result of the second convolution layer and the operation result of the second MSRF attention mechanism, and performing one time of MSRF attention mechanism operation on the convolution result of the third convolution layer;
s1-2-4: and performing convolution operation on the convolution result of the third convolution layer and the fourth convolution layer of which the operation result of the third MSRF attention mechanism adopts 1 convolution kernel, the size of the convolution kernel is 1 x 1 and the activation function is PReLU once, and performing sigmoid function on the convolution result of the fourth convolution layer to complete image decomposition.
Preferably, the MSRF attention operation, shown in Fig. 2, specifically includes:
applying to the input tensor (the convolution result of the preceding convolution layer) two 1 × 1 convolutions with ReLU activations, followed by a further 1 × 1 convolution; adding the input tensor to the processed tensor and applying the ReLU function again; repeating this process; and finally applying two 1 × 1 convolutions with a sigmoid activation to obtain different weights for different positions in the image.
The attention map generated by the MSRF attention mechanism of this embodiment weights the importance of each position of the image features so as to highlight foreground-target information in the feature map; after the MSRF attention operation, the sigmoid produces a probability value smaller than 1, i.e. a weight, so that once an input image enters the network, different positions are weighted by different probability values.
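The wording above is ambiguous in places, so the following NumPy sketch shows one plausible reading of the MSRF attention block: 1 × 1 convolutions act as per-pixel linear maps over channels, a residual branch is repeated, and a final sigmoid gate yields per-position weights in (0, 1). All weights here are random placeholders, not trained parameters:

```python
import numpy as np

def conv1x1(x, w, b):
    # a 1x1 convolution is a per-pixel linear map over the channel axis
    return np.einsum('hwc,cd->hwd', x, w) + b

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def msrf_attention(x, rng):
    """Loose sketch of the MSRF attention block as we read the Fig. 2
    description: a residual branch of 1x1 convolutions, repeated,
    then a sigmoid gate producing per-position weights."""
    c = x.shape[2]
    def rand_w():  # placeholder weights; a real network would learn these
        return rng.standard_normal((c, c)) * 0.1, np.zeros(c)
    y = x
    for _ in range(2):                       # "repeating the process"
        h = relu(conv1x1(y, *rand_w()))
        h = relu(conv1x1(h, *rand_w()))
        h = conv1x1(h, *rand_w())
        y = relu(y + h)                      # residual add, then ReLU again
    g = conv1x1(y, *rand_w())
    g = sigmoid(conv1x1(g, *rand_w()))       # per-position weights in (0, 1)
    return x * g                             # re-weight the input features

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 32))
out = msrf_attention(feat, rng)
print(out.shape)  # (8, 8, 32)
```

Because the gate values lie strictly in (0, 1), the output features never exceed the input features in magnitude; positions the gate scores highly keep most of their activation.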
In step S2, this embodiment proposes a new loss function to train the fusion enhancement network FENet, so as to address the unbalanced sample distribution.
Specifically, a scale factor is integrated into the focal loss, giving the ω-focal loss; during training, the scale factor automatically down-weights classes containing many objects and allocates more attention to classes with few objects. The ω-focal loss function is

L_{ω-focal} = -(ω_i + α_i · (1 - p_i)^γ) · log(p_i),

where ω_i is a weight vector representing the proportion of positive labels of each object class in the training set, and β is a hyper-parameter set through cross-validation. The weighted cross-entropy term is

L_{ω-CE} = -ω_i · log(p_i);

the loss value of the final loss function is L_{ω-focal} = -(ω_i + α_i · (1 - p_i)^γ) · log(p_i).
In this embodiment, the decomposed illumination and reflection components are input into the trained fusion enhancement network FENet to obtain the enhanced picture, as shown in Fig. 3, which specifically includes:
S2-1: the fusion enhancement network FENet comprises a first convolution layer with 32 convolution kernels of size 3 × 3, a second convolution layer with 64 convolution kernels of size 3 × 3, a third convolution layer with 128 convolution kernels of size 3 × 3, a convolution connection layer with 32 convolution kernels of size 3 × 3, and a fourth convolution layer with 64 convolution kernels of size 3 × 3; each convolution layer includes a PReLU activation function;
S2-2: apply the first convolution layer twice to the illumination and reflection components, the second convolution layer twice, the third convolution layer four times, the convolution connection layer once, and the fourth convolution layer twice, performing one MSRF attention operation after the convolution operations of each convolution layer.
Preferably, in the step S2-2:
s2-2-1: performing convolution operation twice on a first convolution layer which adopts 32 convolution kernels for illumination components and reflection components, has the convolution kernel size of 3 x 3 and has an activation function of PReLU, and performing MSRF attention mechanism operation once on the convolution result of the first convolution layer;
s2-2-2: performing convolution operation twice on a second convolution layer with 64 convolution kernels, convolution kernel size of 3 x 3 and activation function of PReLU on the convolution result of the first convolution layer and the result of the first MSRF attention mechanism operation, and performing MSRF attention mechanism operation once on the convolution result of the second convolution layer
S2-2-3: performing four times of convolution operations on a third convolution layer with 128 convolution kernels, the size of the convolution kernel being 3 x 3 and the activation function being PReLU on the convolution result of the second convolution layer and the operation result of the second MSRF attention mechanism, and performing one time of MSRF attention mechanism operation on the convolution result of the third convolution layer;
s2-2-4: performing primary connection operation on the convolution result of the third convolution layer and the convolution connection layer of which the operation result of the third MSRF attention mechanism adopts 32 convolution kernels, the size of the convolution kernels is 3 x 3 and the activation function is PReLU, and connecting the result after convolution in S2-2-1 with the result after convolution in S2-2-3;
s2-2-5: performing convolution operation twice on the connection result of the convolution connection layer by adopting 64 convolution kernels, the size of the convolution kernels is 3 x 3, and the activation function is PReLU;
s2-2-6: and finally, passing the convolution result of the fourth convolution layer through a sigmoid function.
Example 2
The embodiment provides a low-light image enhancement system based on an attention mechanism, which comprises:
a decomposition module configured to decompose a low-light image to be enhanced with a retina decomposition network to obtain an illumination component and a reflection component, wherein the retina decomposition network comprises sequentially connected convolution layers with different convolution kernel sizes, and an MSRF attention operation follows each convolution layer;
and an enhancement module configured to train a pre-constructed fusion enhancement network with a focal loss function and to apply the trained fusion enhancement network to the illumination and reflection components to obtain an enhanced image.
It should be noted that the modules correspond to the steps described in embodiment 1, and the modules are the same as the corresponding steps in the implementation examples and application scenarios, but are not limited to the disclosure in embodiment 1. It should be noted that the modules described above as part of a system may be implemented in a computer system such as a set of computer-executable instructions.
In further embodiments, there is also provided:
An electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, the computer instructions, when executed by the processor, performing the method of embodiment 1. For brevity, details already described are not repeated here.
It should be understood that in this embodiment, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method described in embodiment 1.
The method in embodiment 1 may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may reside in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, this is not described in detail here.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made on the basis of the technical solution of the present invention without inventive effort.

Claims (10)

1. A low-light image enhancement method based on an attention mechanism is characterized by comprising the following steps:
decomposing a low-light image to be enhanced with a retina decomposition network to obtain an illumination component and a reflection component, wherein the retina decomposition network comprises four types of sequentially connected convolution layers with different convolution kernel sizes, and a multi-scale receptive field (MSRF) attention operation follows each type of convolution layer;
and training a pre-constructed fusion enhancement network with a focal loss function, then applying the trained fusion network to the illumination and reflection components to obtain an enhanced picture.
2. The attention mechanism-based low-light image enhancement method as claimed in claim 1, characterized in that the per-pixel maximum over the RGB channels of the low-light image to be enhanced is taken to obtain a single-channel maximum map, which is concatenated with the original low-light image to obtain a four-channel estimated illumination map; and after the estimated illumination map has passed sequentially through the convolution operations of the four convolution layers, a sigmoid function is applied, the first three channels are taken as the reflection component and the last channel as the illumination component, completing the decomposition of the low-light image.
3. The low-light image enhancement method based on the attention mechanism as claimed in claim 1, wherein the convolution layers with four different convolution kernel sizes comprise: a first convolution layer with 32 convolution kernels of size 3 × 3, a second convolution layer with 64 convolution kernels of size 3 × 3, a third convolution layer with 128 convolution kernels of size 3 × 3, and a fourth convolution layer with 1 convolution kernel of size 1 × 1; and each convolution layer includes a PReLU activation function.
4. The low-light image enhancement method based on the attention mechanism as claimed in claim 3, characterized in that said decomposition comprises: performing the convolution operation of the first convolution layer twice, and performing one MSRF attention operation on the convolution result of the first convolution layer;
performing the convolution operation of the second convolution layer twice on the convolution result of the first convolution layer and the result of the first MSRF attention operation, and performing one MSRF attention operation on the convolution result of the second convolution layer;
performing the convolution operation of the third convolution layer four times on the convolution result of the second convolution layer and the result of the second MSRF attention operation, and performing one MSRF attention operation on the convolution result of the third convolution layer;
and performing the convolution operation of the fourth convolution layer on the convolution result of the third convolution layer and the result of the third MSRF attention operation, and passing the convolution result of the fourth convolution layer through a sigmoid function to complete the decomposition.
5. The low-light image enhancement method based on the attention mechanism as claimed in claim 4, wherein the MSRF attention operation comprises: applying to the input tensor (the convolution result of the preceding convolution layer) two 1 × 1 convolutions with ReLU activations, followed by a further 1 × 1 convolution; adding the input tensor to the processed tensor and applying the ReLU function again; repeating this process; and then applying two 1 × 1 convolutions with a sigmoid activation to obtain different weights for different positions in the low-light image.
6. The low-light image enhancement method based on the attention mechanism as claimed in claim 1, wherein scale factors given by the proportion of positive labels of each object class in the training set are integrated into a focal loss function to obtain an ω-focal loss function, and the pre-constructed fusion enhancement network is trained with the ω-focal loss function.
7. The low-light image enhancement method based on the attention mechanism as claimed in claim 1, wherein the fusion enhancement network comprises a first convolution layer with 32 convolution kernels of size 3 × 3, a second convolution layer with 64 convolution kernels of size 3 × 3, a third convolution layer with 128 convolution kernels of size 3 × 3, a convolution connection layer with 32 convolution kernels of size 3 × 3, and a fourth convolution layer with 64 convolution kernels of size 3 × 3; and each convolution layer includes a PReLU activation function.
8. An attention-mechanism-based low-light image enhancement system, comprising:
a Retinex decomposition network comprising sequentially connected convolution layers with different convolution kernel sizes, with an MSRF attention mechanism operation added after each convolution layer;
and an enhancement module configured to train the pre-constructed fusion enhancement network with the focal loss function, and to obtain an enhanced image by applying the trained fusion enhancement network to the illumination component and the reflection component.
9. An electronic device, comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the method of any one of claims 1 to 7.
CN202110142918.XA 2021-02-02 2021-02-02 Attention mechanism-based low-light image enhancement method and system Active CN112862713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110142918.XA CN112862713B (en) 2021-02-02 2021-02-02 Attention mechanism-based low-light image enhancement method and system

Publications (2)

Publication Number Publication Date
CN112862713A true CN112862713A (en) 2021-05-28
CN112862713B CN112862713B (en) 2022-08-09

Family

ID=75987665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110142918.XA Active CN112862713B (en) 2021-02-02 2021-02-02 Attention mechanism-based low-light image enhancement method and system

Country Status (1)

Country Link
CN (1) CN112862713B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570381A (en) * 2019-09-17 2019-12-13 合肥工业大学 semi-decoupling image decomposition dark light image enhancement method based on Gaussian total variation
CN111932471A (en) * 2020-07-24 2020-11-13 山西大学 Double-path exposure degree fusion network model and method for low-illumination image enhancement
CN112069983A (en) * 2020-09-03 2020-12-11 武汉工程大学 Low-illumination pedestrian detection method and system for multi-task feature fusion shared learning
CN112131975A (en) * 2020-09-08 2020-12-25 东南大学 Face illumination processing method based on Retinex decomposition and generation of confrontation network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JASON_FNG: "Code walkthrough - Retinex low-light image enhancement (Deep Retinex Decomposition for Low-Light Enhancement)", CSDN *
JUNYI WANG ET AL.: "RDGAN: Retinex Decomposition Based Adversarial Learning for Low-Light Enhancement", 2019 IEEE International Conference on Multimedia and Expo (ICME) *
XINGS1992: "RDGAN: Retinex Decomposition Based Adversarial Learning for Low-Light Enhancement (paper reading notes)", CSDN *
ZHONG JI ET AL.: "Small and Dense Commodity Object Detection with Multi-Scale Receptive Field Attention", MM '19: Proceedings of the 27th ACM International Conference on Multimedia *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998120A (en) * 2022-05-17 2022-09-02 深圳小湃科技有限公司 Dim light image optimization training method, intelligent terminal and computer readable storage medium
CN114998120B (en) * 2022-05-17 2024-01-12 深圳小湃科技有限公司 Dim light image optimization training method, intelligent terminal and computer readable storage medium
CN117011194A (en) * 2023-10-07 2023-11-07 暨南大学 Low-light image enhancement method based on multi-scale dual-channel attention network
CN117011194B (en) * 2023-10-07 2024-01-30 暨南大学 Low-light image enhancement method based on multi-scale dual-channel attention network

Also Published As

Publication number Publication date
CN112862713B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
EP3937481A1 (en) Image display method and device
KR102442844B1 (en) Method for Distinguishing a Real Three-Dimensional Object from a Two-Dimensional Spoof of the Real Object
JP2018195293A5 (en)
CN112862713B (en) Attention mechanism-based low-light image enhancement method and system
KR20180065889A (en) Method and apparatus for detecting target
CN111402146A (en) Image processing method and image processing apparatus
CN110148088B (en) Image processing method, image rain removing method, device, terminal and medium
Montulet et al. Deep learning for robust end-to-end tone mapping
CN110929805A (en) Neural network training method, target detection device, circuit and medium
CN113379613A (en) Image denoising system and method using deep convolutional network
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
CN113052768B (en) Method, terminal and computer readable storage medium for processing image
WO2023125750A1 (en) Image denoising method and apparatus, and storage medium
CN115526803A (en) Non-uniform illumination image enhancement method, system, storage medium and device
CN110717864B (en) Image enhancement method, device, terminal equipment and computer readable medium
Zheng et al. Windowing decomposition convolutional neural network for image enhancement
CN109801224A (en) A kind of image processing method, device, server and storage medium
JPWO2019023376A5 (en)
JP7146461B2 (en) Image processing method, image processing device, imaging device, program, and storage medium
CN116433518A (en) Fire image smoke removing method based on improved Cycle-Dehaze neural network
GB2577732A (en) Processing data in a convolutional neural network
US11823361B2 (en) Image processing
CN113205464B (en) Image deblurring model generation method, image deblurring method and electronic equipment
CN112581401B (en) RAW picture acquisition method and device and electronic equipment
CN111383299B (en) Image processing method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231207

Address after: 400000, 2nd Floor, No. 27-5 Fengsheng Road, Jinfeng Town, Chongqing High tech Zone, Jiulongpo District, Chongqing

Patentee after: CHONGQING BIO NEWVISION MEDICAL EQUIPMENT Ltd.

Address before: 250014 No. 88, Wenhua East Road, Lixia District, Shandong, Ji'nan

Patentee before: SHANDONG NORMAL University