CN115482169A - Low-illumination image enhancement method and device, electronic equipment and storage medium - Google Patents

Low-illumination image enhancement method and device, electronic equipment and storage medium

Info

Publication number
CN115482169A
CN115482169A (application CN202211175581.3A)
Authority
CN
China
Prior art keywords
image
low
illumination
global
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211175581.3A
Other languages
Chinese (zh)
Inventor
赵文勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Information Technology
Original Assignee
Shenzhen Institute of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Information Technology filed Critical Shenzhen Institute of Information Technology
Priority to CN202211175581.3A priority Critical patent/CN115482169A/en
Publication of CN115482169A publication Critical patent/CN115482169A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G06T5/94 - Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G06T5/92 - Dynamic range modification of images or parts thereof based on global image properties
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of image processing and discloses a low-illumination image enhancement method comprising the following steps: acquiring a low-illumination image and decomposing it into an illumination component and a reflection component using the trained decomposer of a generative adversarial network (GAN); identifying guide pixels of the reflection component and constructing an initial enhanced image of the low-illumination image with the network's generator according to the guide pixels; identifying the global image effect of the initial enhanced image with the network's global discriminator and the local image effect with its local discriminator; and judging, according to the illumination component, whether the global and local image effects each satisfy preset conditions, taking the initial enhanced image as the final enhanced image of the low-illumination image when both do. The invention can improve the enhancement effect of low-illumination images.

Description

Low-illumination image enhancement method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular to a low-illumination image enhancement method and apparatus, an electronic device, and a storage medium.
Background
Low-illumination image enhancement improves the quality and brightness of pictures shot under low-light conditions so that they appear clearer and better match human visual perception; it can improve the visibility of night-time inspection or surveillance footage and strengthen the image recognition capabilities of software platforms.
Existing methods mainly first calculate the brightness value of an image, convert the image from the RGB color space to the Lab color space when the calculated brightness is too low, and perform dehazing on the image in the Lab color space to achieve the enhancement effect. However, such pipelines apply a single fixed transform to the whole image, which limits how well global brightness and local detail can be enhanced together.
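A rough sketch of the brightness check that triggers this kind of pipeline (the Rec. 601 luma weights are standard, but the 0.25 threshold is an illustrative assumption, not a value from the patent):

```python
import numpy as np

def mean_brightness(rgb):
    """Mean luma of an RGB image with values in [0, 1] (Rec. 601 weights)."""
    return float(np.mean(0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]))

def needs_enhancement(rgb, threshold=0.25):
    """Flag an image as low-illumination when its mean luma falls below an assumed threshold."""
    return mean_brightness(rgb) < threshold

dark = np.full((4, 4, 3), 0.05)    # nearly black test image
bright = np.full((4, 4, 3), 0.8)
print(needs_enhancement(dark), needs_enhancement(bright))  # True False
```

Only images that fail such a check would then be sent on to the enhancement step.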
Disclosure of Invention
In order to solve the above problems, the present invention provides a low-illumination image enhancement method, apparatus, electronic device, and storage medium that can improve the enhancement effect of low-illumination images.
In a first aspect, the present invention provides a low-illumination image enhancement method, including:
acquiring a low-illumination image, and decomposing the low-illumination image into an illumination component and a reflection component by using the trained decomposer of a generative adversarial network;
identifying guide pixels of the reflection component, and constructing an initial enhanced image of the low-illumination image by using the generator of the generative adversarial network according to the guide pixels;
identifying a global image effect of the initial enhanced image by using the global discriminator of the generative adversarial network, and identifying a local image effect of the initial enhanced image by using the local discriminator of the generative adversarial network;
and respectively judging, according to the illumination component, whether the global image effect and the local image effect satisfy preset conditions, and taking the initial enhanced image as the final enhanced image of the low-illumination image when both satisfy the preset conditions.
In one possible implementation of the first aspect, decomposing the low-illumination image into an illumination component and a reflection component by using the trained decomposer of the generative adversarial network includes:
extracting image features of the low-illumination image by using the convolution layers in the decomposer;
locking onto image detail features by using the pooling layers in the decomposer according to the image features;
and decomposing the low-illumination image into the illumination component and the reflection component by using the image decomposition function in the decomposer according to the image detail features.
In one possible implementation of the first aspect, identifying the guide pixels of the reflection component includes:
identifying detail pixels in the reflection component;
and judging whether a detail pixel can serve as a guide detail, and taking the detail pixel as a guide pixel when it can.
In one possible implementation of the first aspect, constructing an initial enhanced image of the low-illumination image by using the generator of the generative adversarial network according to the guide pixels includes:
identifying the guide details of the guide pixels by using the pixel identification layer in the generator;
configuring, according to the guide details, a generation rule for generating the enhanced image by using the rule layer in the generator;
and generating the initial enhanced image by using the image generation function in the generator according to the generation rule.
In one possible implementation of the first aspect, identifying the global image effect of the initial enhanced image by using the global discriminator of the generative adversarial network includes:
acquiring a global feature map of the initial enhanced image by using the feature layer in the global discriminator;
identifying the dimensions of the global feature map by using the dimension layer in the global discriminator;
and calculating the global image effect of the initial enhanced image by using the global feature function in the global discriminator according to the dimensions of the global feature map.
In a possible implementation of the first aspect, the global feature function includes:

L31 = (1 / (W_{i,j} · H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} (φ_{i,j}(I)_{x,y} − φ_{i,j}(G)_{x,y})²

wherein L31 represents the global image effect; i,j index the global feature map corresponding to the initial enhanced image; W_{i,j} and H_{i,j} represent the width and height of that feature map; φ_{i,j}(·) represents the feature map extracted by the global discriminator; G represents the initial enhanced image; and I represents the low-illumination image corresponding to the initial enhanced image.
In a second aspect, the present invention provides a low-illumination image enhancement apparatus, comprising:
a low-illumination image decomposition module, configured to acquire a low-illumination image and decompose it into an illumination component and a reflection component by using the trained decomposer of a generative adversarial network;
an enhanced image generation module, configured to identify guide pixels of the reflection component and construct an initial enhanced image of the low-illumination image by using the generator of the generative adversarial network according to the guide pixels;
an enhanced image discrimination module, configured to identify the global image effect of the initial enhanced image by using the global discriminator of the generative adversarial network and to identify the local image effect of the initial enhanced image by using the local discriminator of the generative adversarial network;
and an enhanced image output module, configured to judge, according to the illumination component, whether the global image effect and the local image effect satisfy preset conditions, and to take the initial enhanced image as the final enhanced image of the low-illumination image when both satisfy the preset conditions.
In a third aspect, the present invention provides an electronic device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the low-illumination image enhancement method described in any implementation of the first aspect above.
In a fourth aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the low-illumination image enhancement method described in any implementation of the first aspect above.
Compared with the prior art, the technical principle and beneficial effects of this scheme are as follows:
By acquiring a low-illumination image and decomposing it into an illumination component and a reflection component with the trained decomposer of a generative adversarial network, the embodiment of the invention separates out the detail image of the low-illumination image, providing detail guidance for the later generation of the enhanced image. Next, by identifying the guide pixels of the reflection component, the detail information carried in the reflection component is made explicit, so that image details can be better guided and enhanced and image quality improved. The generator of the generative adversarial network then constructs an initial enhanced image from the guide pixels, producing an image with improved quality and preserved detail. The global and local discriminators of the network identify the global and local image effects of the initial enhanced image, providing effect data to support the later judgment of the enhancement result. Finally, whether the global and local image effects satisfy the preset conditions is judged according to the illumination component, so that images whose enhancement reaches the required standard are screened out. Therefore, the low-illumination image enhancement method, apparatus, electronic device, and storage medium provided by the embodiments of the invention constitute a complete and efficient low-illumination enhancement pipeline and improve the enhancement effect for low-illumination images.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to illustrate the embodiments of the present invention and the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating a method for enhancing a low-illumination image according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a step of the low-illumination image enhancement method shown in fig. 1 according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating another step of the low-illumination image enhancement method shown in fig. 1 according to an embodiment of the present invention;
fig. 4 is a block diagram of a low-illumination image enhancement apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an internal structure of an electronic device implementing a low-illuminance image enhancement method according to an embodiment of the present invention.
Detailed Description
It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Embodiments of the present invention provide a low-illumination image enhancement method, where an execution subject of the low-illumination image enhancement method includes but is not limited to at least one of a server, a terminal, and other electronic devices that can be configured to execute the method provided by embodiments of the present invention. In other words, the low-illuminance image enhancement method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Fig. 1 is a schematic flow chart of a low-illumination image enhancement method according to an embodiment of the present invention. The low-illumination image enhancement method described in fig. 1 includes:
s1, acquiring a low-illumination image, and decomposing the low-illumination image into an illumination component and a reflection component by using a trained confrontation generation network resolver.
According to the embodiment of the invention, by acquiring the low-illumination image and decomposing the low-illumination image into the illumination component and the reflection component by using the trained splitter in the countermeasure generation network, the detailed image of the low-illumination image can be decomposed, so that detail guidance is provided for the effect of generating the enhanced image in the later stage.
The low-illumination image refers to a picture taken in a low-light environment; the illumination component refers to an illumination image component under ambient light, and the reflection component refers to a reflection image component of a target object carrying image detail information.
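The Retinex model behind this split (image = illumination × reflectance) can be illustrated by hand; here the illumination estimate is a simple max-channel prior, an illustrative assumption, whereas the patent's decomposer is a trained network:

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Split an RGB image (H, W, 3) with values in [0, 1] into an illumination
    component L and a reflection component R under the Retinex model I = L * R.
    The max-channel illumination prior is a common hand-crafted stand-in for
    the trained decomposer."""
    L = img.max(axis=-1, keepdims=True)  # illumination: per-pixel channel maximum
    R = img / (L + eps)                  # reflectance: detail left after removing lighting
    return L, R

img = np.random.default_rng(0).random((8, 8, 3))
L, R = retinex_decompose(img)
print(np.allclose(L * R, img, atol=1e-4))  # True: the factorization reconstructs I
```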
Referring to fig. 2, as an embodiment of the present invention, decomposing the low-illumination image into an illumination component and a reflection component by using the trained decomposer of the generative adversarial network includes:
S201, extracting image features of the low-illumination image by using the convolution layers in the decomposer;
S202, locking onto image detail features by using the pooling layers in the decomposer according to the image features;
S203, decomposing the low-illumination image into the illumination component and the reflection component by using the image decomposition function in the decomposer according to the image detail features.
The convolution layers are layers used to acquire features in the image. The image features are the feature attributes of the image obtained by the convolution layers; for example, the convolution layers in the decomposer extract features such as image brightness, edges, textures and colors, the main photographed subject, and the shooting environment. The pooling layer is a layer used to lock onto picture details among the image feature attributes. The image detail features are the details locked within the image features by the pooling layer, such as edge jaggedness, texture shapes, people's expressions and actions, or, in a rainy shooting environment, details such as water pooled at the roadside and the amount of rainfall.
Further, in an optional embodiment of the present invention, the image decomposition function includes:

L1 = L11 + L12 + L13

wherein L1 denotes the image decomposition function, L11 denotes the decomposition error term, L12 denotes the downsampled reconstruction error term, and L13 denotes the downsampled gradient error term.
Further, in an optional embodiment of the present invention, the decomposition error term includes:

L11 = Σ_{x,y} (I(x,y) − L(x,y) · R(x,y))²

wherein L11 represents the decomposition error term, I(x,y) represents the low-illumination image, L(x,y) represents the illumination component, and R(x,y) represents the reflection component; the term penalizes any departure from the Retinex factorization I = L · R.
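Assuming the squared-difference form implied by the Retinex relation I ≈ L · R, the decomposition error term can be sketched as:

```python
import numpy as np

def decomposition_error(I, L, R):
    """L11: squared error between the input image and the product of its
    estimated illumination and reflection components (an assumed form;
    the patent's exact expression is given only as a figure)."""
    return float(np.sum((I - L * R) ** 2))

I = np.full((2, 2), 0.5)
L = np.full((2, 2), 0.5)
R = np.full((2, 2), 1.0)
print(decomposition_error(I, L, R))  # 0.0 for a perfect decomposition
```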
Further, in an optional embodiment of the present invention, the downsampled reconstruction error term includes:

L12 = Σ_{R_i ∈ {sub1}} Σ_{R_j ∈ {sub2}} Σ_{x,y} (R_i(x,y) − R_j(x,y))²

wherein L12 represents the downsampled reconstruction error term, R_i(x,y) and R_j(x,y) represent two sub-images obtained by downsampling the reflection component R(x,y), and {sub1} and {sub2} respectively represent the sets of sub-images obtained by downsampling all reflection components; the term requires the downsampled sub-images to reconstruct one another consistently.
Further, in an optional embodiment of the present invention, the downsampled gradient error term includes:

L13 = Σ_{R_i ∈ {sub1}} Σ_{R_j ∈ {sub2}} Σ_{x,y} ‖∇R_i(x,y) − ∇R_j(x,y)‖²

wherein L13 represents the downsampled gradient error term and ∇ represents the gradient operator; the gradients of the downsampled sub-images are likewise required to stay consistent.
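Under the assumption that the two sub-image sets come from interleaved row/column sampling and that both terms penalize disagreement between the sub-images (the patent's exact sampling scheme is not recoverable from the figures), the two downsampling terms can be sketched together:

```python
import numpy as np

def downsample_pair(R):
    """Two sub-images of a reflection map: even rows/cols and odd rows/cols
    (an assumed sampling scheme)."""
    return R[0::2, 0::2], R[1::2, 1::2]

def downsample_losses(R):
    """L12: reconstruction disagreement between the two sub-images.
       L13: disagreement between their gradients."""
    Ri, Rj = downsample_pair(R)
    L12 = float(np.sum((Ri - Rj) ** 2))
    gyi, gxi = np.gradient(Ri)
    gyj, gxj = np.gradient(Rj)
    L13 = float(np.sum((gyi - gyj) ** 2) + np.sum((gxi - gxj) ** 2))
    return L12, L13

print(downsample_losses(np.ones((8, 8))))  # (0.0, 0.0): a constant map downsamples identically
```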
S2, identifying guide pixels of the reflection component, and constructing an initial enhanced image of the low-illumination image by using the generator of the generative adversarial network according to the guide pixels.
By identifying the guide pixels of the reflection component, the embodiment of the invention makes explicit the detail information carried in the reflection component, better guiding the enhancement of image details and improving image quality. The guide pixels are the detail image pixels carried in the reflection component that can enhance image details, such as texture pixels, color pixels, and edge pixels.
As an embodiment of the present invention, identifying the guide pixels of the reflection component includes: identifying detail pixels in the reflection component; judging whether a detail pixel can serve as a guide detail; and, when it can, taking the detail pixel as a guide pixel.
The detail pixels are the detail image pixels carried in the reflection component, such as texture pixels, color pixels, and edge pixels; the guide details are detail pixels that can guide the enhanced image to add detail.
Further, in an optional embodiment of the present invention, identifying detail pixels in the reflection component may be implemented with KEYENCE AI image recognition technology.
Further, in an optional embodiment of the present invention, judging whether a detail pixel can serve as a guide detail may be implemented with a judgment function.
Further, by constructing the initial enhanced image of the low-illumination image with the generator of the generative adversarial network according to the guide pixels, the embodiment of the invention can generate an image with preserved details and enhanced quality. The initial enhanced image is the first enhanced image produced by the generator.
As an embodiment of the present invention, referring to fig. 3, constructing the initial enhanced image of the low-illumination image by using the generator of the generative adversarial network according to the guide pixels includes:
S301, identifying the guide details of the guide pixels by using the pixel identification layer in the generator;
S302, configuring, according to the guide details, a generation rule for generating the enhanced image by using the rule layer in the generator;
S303, generating the initial enhanced image by using the image generation function in the generator according to the generation rule.
The pixel identification layer is the layer that identifies the detail information of the guide pixels; the rule layer is the layer that establishes the rule by which the generator produces the enhanced image, for example a top-to-bottom generation order for the image or a timeline for applying the detail guidance.
Further, in an optional embodiment of the present invention, the image generation function includes:

L2 = L21 + L22 + L23 + L24

wherein L2 represents the image generation function, L21 represents the gradient error term, L22 represents the smoothness error term, L23 represents the overall similarity error term, and L24 represents the local similarity error term.
Further, in an optional embodiment of the present invention, the gradient error term includes:

L21 = Σ_i Σ_{x,y} ‖∇G_i(x,y) − ∇R_i(x,y)‖²

wherein G_i(x,y) denotes the initial enhanced image and R_i(x,y) denotes the guide image i obtained by decomposing the original image i with the decomposer; the term keeps the gradients of the generated image consistent with those of its guide image.
Further, in an optional embodiment of the present invention, the smoothness error term includes:

L22 = Σ_i Σ_{x,y} |∇G_i(x,y)| · exp(−|∇R_i(x,y)|)

wherein G_i(x,y) denotes the initial enhanced image, R_i(x,y) denotes the guide image i obtained by decomposing the original image i with the decomposer, and exp denotes the exponential function with the natural constant e as its base; gradients of the generated image are penalized except where the guide image itself contains edges.
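The two guide-image terms above can be sketched with forward-difference gradients (the squared-difference and edge-aware-weight forms are assumptions reconstructed from the variable descriptions):

```python
import numpy as np

def gradient_error(G, R):
    """L21: the generated image's gradients should match the guide image's."""
    gGy, gGx = np.gradient(G)
    gRy, gRx = np.gradient(R)
    return float(np.sum((gGy - gRy) ** 2 + (gGx - gRx) ** 2))

def smoothness_error(G, R):
    """L22: edge-aware smoothness; penalize gradients of G except where the
    guide image R itself has edges (the exp(-|grad R|) weight)."""
    gGy, gGx = np.gradient(G)
    gRy, gRx = np.gradient(R)
    weight = np.exp(-(np.abs(gRy) + np.abs(gRx)))
    return float(np.sum((np.abs(gGy) + np.abs(gGx)) * weight))

G = np.ones((4, 4))
R = np.ones((4, 4))
print(gradient_error(G, R), smoothness_error(G, R))  # 0.0 0.0 for a flat image and guide
```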
Further, in an optional embodiment of the present invention, the local similarity error term includes:

L24 = E_{x_f ∼ P_patch}[(D(x_f) − 1)²]

wherein P_patch represents the probability distribution formed by randomly sampling local regions of the generated picture, x_f represents a sampled local region, D represents the local discriminator, and (D(x_f) − 1)² measures the local similarity.
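This least-squares patch loss can be sketched with a stand-in discriminator (the patch size, patch count, and discriminator itself are illustrative assumptions):

```python
import numpy as np

def local_similarity_error(img, discriminator, patch=4, n=8, seed=0):
    """L24: average (D(x_f) - 1)^2 over n randomly sampled local patches x_f,
    where D scores a patch's realism in [0, 1]."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    total = 0.0
    for _ in range(n):
        y = int(rng.integers(0, h - patch + 1))
        x = int(rng.integers(0, w - patch + 1))
        total += (discriminator(img[y:y + patch, x:x + patch]) - 1.0) ** 2
    return total / n

always_real = lambda p: 1.0  # stand-in discriminator that calls every patch real
print(local_similarity_error(np.zeros((16, 16)), always_real))  # 0.0: no penalty when D is fooled
```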
S3, identifying the global image effect of the initial enhanced image by using the global discriminator of the generative adversarial network, and identifying the local image effect of the initial enhanced image by using the local discriminator of the generative adversarial network.
By identifying the global image effect of the initial enhanced image with the global discriminator of the generative adversarial network, the embodiment of the invention assesses the overall effect of the image and provides effect data to support the later judgment of the enhancement result. The global image effect refers to the overall effect of the enhanced image.
As an embodiment of the present invention, identifying the global image effect of the initial enhanced image by using the global discriminator of the generative adversarial network includes: acquiring a global feature map of the initial enhanced image by using the feature layer in the global discriminator; identifying the dimensions of the global feature map by using the dimension layer in the global discriminator; and calculating the global image effect of the initial enhanced image by using the global feature function in the global discriminator according to the dimensions of the global feature map.
The global feature map refers to the global image features of the initial enhanced image; the feature layer is the layer used to obtain the features of the initial enhanced image; and the dimension layer is the layer used to identify the dimensions of the feature map, such as its width and height.
Further, in an optional embodiment of the present invention, identifying the dimensions of the global feature map may be accomplished with an image scanning technique.
Further, in an optional embodiment of the present invention, the global feature function includes:

L31 = (1 / (W_{i,j} · H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} (φ_{i,j}(I)_{x,y} − φ_{i,j}(G)_{x,y})²

wherein L31 represents the global image effect; i,j index the global feature map corresponding to the initial enhanced image; W_{i,j} and H_{i,j} represent the width and height of that feature map; φ_{i,j}(·) represents the feature map extracted by the global discriminator; G represents the initial enhanced image; and I represents the low-illumination image corresponding to the initial enhanced image.
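Reading the formula as a feature-space distance normalized by the feature map's width and height (an assumed perceptual-loss form reconstructed from the variable descriptions), it can be sketched as:

```python
import numpy as np

def global_feature_loss(feat_I, feat_G):
    """L31: mean squared distance between the discriminator feature maps of the
    low-illumination input (feat_I) and the enhanced output (feat_G),
    normalized by the map's width W and height H."""
    W, H = feat_I.shape[:2]
    return float(np.sum((feat_I - feat_G) ** 2) / (W * H))

feat_I = np.zeros((3, 3))
feat_G = np.full((3, 3), 2.0)
print(global_feature_loss(feat_I, feat_G))  # 4.0: every feature differs by 2
```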
Further, by identifying the local image effect of the enhanced image with the local discriminator of the generative adversarial network, the embodiment of the invention assesses the local effect of the image and provides effect data to support the later judgment of the enhancement result. The local image effect refers to the effect of a local part of the enhanced image.
As an embodiment of the present invention, identifying the local image effect of the initial enhanced image by using the local discriminator of the generative adversarial network includes: acquiring an enhanced local image of the initial enhanced image by using the segmentation layer in the local discriminator; identifying, according to the enhanced local image, the corresponding initial local image by using the original layer in the local discriminator; and calculating the local image effect of the initial enhanced image by using the local feature function in the local discriminator according to the enhanced local image and the initial local image.
The segmentation layer is the layer used to obtain an enhanced local image of the initial enhanced image; the original layer is the layer used to obtain the corresponding local image of the original input.
Further, it should be noted that the local image effect of the initial enhanced image is calculated on the same principle as the global image effect, and the details are not repeated here.
And S4, respectively judging whether the global image effect and the local image effect meet preset conditions or not according to the illumination components, and taking the initial enhanced image as a final enhanced image of the low-illumination image when the global image effect and the local image effect both meet the preset conditions.
According to the embodiment of the invention, whether the global image effect and the local image effect meet the preset conditions or not is respectively judged according to the irradiation components, and whether the global image effect and the local image effect meet the preset standards or not can be respectively judged through the global effect and the local effect, so that the image with the standard image enhancement effect is screened out. It should be noted that the preset condition is a condition for determining whether the global image effect and the local image effect meet the image enhancement standard, and may be set based on an actual service scene, for example, brightness of the global image effect and contrast of the local image effect may be set as the preset condition.
As an embodiment of the present invention, judging whether the global image effect and the local image effect satisfy the preset conditions according to the illumination component includes: acquiring a global preset condition for the global image effect and a local preset condition for the local image effect according to the illumination component; configuring a judgment rule from the global preset condition and the local preset condition; and judging, according to the judgment rule, whether the global image effect and the local image effect each satisfy the preset conditions.
The global preset condition is the effect standard that the global image effect needs to reach, the local preset condition is the effect standard that the local image effect needs to reach, and the judgment rule is the rule for deciding whether an image effect meets the standard, for example whether the brightness of the global image effect reaches a threshold value and whether the texture is sufficiently clear.
Further, in an optional embodiment of the present invention, judging whether the global image effect and the local image effect satisfy the preset conditions according to the judgment rule may be accomplished by a judgment function.
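As a concrete illustration, such a judgment function can be sketched as follows. The measures used here (mean brightness normalised by the illumination component for the global condition, centre-patch contrast for the local condition) and the threshold values are assumptions for illustration, not the patent's trained discriminator outputs.

```python
import numpy as np

def meets_preset_conditions(enhanced, illumination,
                            brightness_thresh=0.4, contrast_thresh=0.15):
    """Hypothetical judgment rule: the global effect passes when mean
    brightness, normalised by the illumination component, exceeds a
    threshold; the local effect passes when the contrast (standard
    deviation) of a centre patch exceeds another threshold."""
    img = enhanced.astype(np.float64)
    # Global condition: average brightness relative to the illumination map
    global_ok = img.mean() / max(illumination.mean(), 1e-6) > brightness_thresh
    # Local condition: contrast of a centre crop
    h, w = img.shape[:2]
    patch = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    local_ok = patch.std() > contrast_thresh
    return global_ok, local_ok
```

An image would be accepted as the final enhanced image only when both returned flags are true.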
According to the embodiment of the invention, a low-illumination image is acquired and decomposed into an illumination component and a reflection component by the trained decomposer in the generative adversarial network, so that the detail information of the low-illumination image is separated out and can guide the later generation of the enhanced image. Next, by identifying the guide pixel of the reflection component, the detail information carried in the reflection component is made explicit, which better guides detail enhancement and improves image quality. Then, according to the guide pixel, the generator in the generative adversarial network constructs an initial enhanced image, producing an image whose quality is enhanced while its details are preserved. After that, the global discriminator and the local discriminator in the generative adversarial network identify the global image effect and the local image effect of the initial enhanced image, providing effect data for the later judgment of the enhancement result. Finally, whether the global image effect and the local image effect satisfy the preset conditions is judged according to the illumination component, so that images whose enhancement effect reaches the standard are screened out at both the global and the local level. Therefore, the low-illumination image enhancement method, device, electronic equipment and storage medium provided by the embodiment of the invention realize a complete and efficient low-illumination image enhancement procedure and improve the effect of low-illumination image enhancement.
Fig. 4 is a functional block diagram of the low-illuminance image enhancement device according to the present invention.
The low-illuminance image enhancement apparatus 400 according to the present invention may be installed in an electronic device. According to the implemented functions, the low-illumination image enhancement apparatus may include a low-illumination image decomposition module 401, an enhanced image generation module 402, an enhanced image discrimination module 403, and an enhanced image output module 404. A module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device, can perform a fixed function, and are stored in a memory of the electronic device.
In the embodiment of the present invention, the functions of the modules/units are as follows:
the low-illumination image decomposition module 401 is configured to acquire a low-illumination image and decompose the low-illumination image into an illumination component and a reflection component by using a trained decomposer in a generative adversarial network;
the enhanced image generation module 402 is configured to identify a guide pixel of the reflection component and construct an initial enhanced image of the low-illumination image by using a generator in the generative adversarial network according to the guide pixel;
the enhanced image discrimination module 403 is configured to identify a global image effect of the initial enhanced image by using a global discriminator in the generative adversarial network, and identify a local image effect of the initial enhanced image by using a local discriminator in the generative adversarial network;
the enhanced image output module 404 is configured to judge, according to the illumination component, whether the global image effect and the local image effect each satisfy preset conditions, and, when both satisfy the preset conditions, take the initial enhanced image as the final enhanced image of the low-illumination image.
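The cooperation of the four modules can be sketched as a thin pipeline. All class and method names below are illustrative, and the decomposer, generator and discriminators are replaced by simple closed-form stand-ins (a Retinex-style max-channel split, a gamma-style brightening, and brightness/contrast statistics), since the trained networks themselves are not given in the text.

```python
import numpy as np

class LowLightEnhancer:
    """Illustrative wiring of modules 401-404; the decomposer, generator
    and discriminators below are stand-ins, not the trained networks."""

    def decompose(self, image):
        # Stand-in Retinex-style split: illumination = per-pixel channel max,
        # reflectance = image / illumination
        illumination = image.max(axis=-1, keepdims=True)
        reflectance = image / np.maximum(illumination, 1e-6)
        return illumination, reflectance

    def generate(self, illumination, reflectance):
        # Stand-in generator: brighten the illumination map, then recombine
        return np.clip(reflectance * np.sqrt(illumination), 0.0, 1.0)

    def discriminate(self, enhanced):
        # Stand-in global/local "effects": mean brightness and centre-patch contrast
        h, w = enhanced.shape[:2]
        patch = enhanced[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        return enhanced.mean(), patch.std()

    def enhance(self, image, min_brightness=0.2):
        illum, refl = self.decompose(image)
        candidate = self.generate(illum, refl)
        global_effect, local_effect = self.discriminate(candidate)
        # Output the candidate only when the (hypothetical) global condition
        # holds; a full implementation would also check the local effect
        return candidate if global_effect > min_brightness else image
```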
In detail, when the modules in the low-illuminance image enhancement device 400 according to the embodiment of the present invention are used, the same technical means as the low-illuminance image enhancement method described in fig. 1 to fig. 3 are adopted, and the same technical effects can be produced, which is not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device implementing the low-illuminance image enhancement method according to the present invention.
The electronic device may include a processor 50, a memory 51, a communication bus 52, and a communication interface 53, and may further include a computer program, such as a low-light image enhancement program, stored in the memory 51 and executable on the processor 50.
In some embodiments, the processor 50 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital Processing chip, a graphics processor, a combination of various control chips, and the like. The processor 50 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (e.g., executing a low-light image enhancement program, etc.) stored in the memory 51 and calling data stored in the memory 51.
The memory 51 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments the memory 51 may be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. In other embodiments the memory 51 may be an external storage device of the electronic device, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device. Further, the memory 51 may include both an internal storage unit and an external storage device of the electronic device. The memory 51 may be used not only to store application software installed in the electronic device and various types of data, such as the code of the low-illumination image enhancement program, but also to temporarily store data that has been output or will be output.
The communication bus 52 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 51 and at least one processor 50 or the like.
The communication interface 53 is used for communication between the electronic device 5 and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Fig. 5 shows only an electronic device with components, and those skilled in the art will appreciate that the structure shown in fig. 5 does not constitute a limitation of the electronic device, and may include fewer or more components than shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 50 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It should be understood that the embodiments are illustrative only and that the scope of the invention is not limited to this structure.
The low-illumination image enhancement program stored in the memory 51 of the electronic device is a combination of computer programs which, when run in the processor 50, can realize:
acquiring a low-illumination image, and decomposing the low-illumination image into an illumination component and a reflection component by using a trained decomposer in a generative adversarial network;
identifying a guide pixel of the reflection component, and constructing an initial enhanced image of the low-illumination image by using a generator in the generative adversarial network according to the guide pixel;
identifying a global image effect of the initial enhanced image by using a global discriminator in the generative adversarial network, and identifying a local image effect of the initial enhanced image by using a local discriminator in the generative adversarial network;
judging, according to the illumination component, whether the global image effect and the local image effect each satisfy preset conditions, and, when both satisfy the preset conditions, taking the initial enhanced image as the final enhanced image of the low-illumination image.
Specifically, the processor 50 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, if the integrated module/unit of the electronic device is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. The storage medium may be volatile or non-volatile. For example, the computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a U-disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor of an electronic device, can realize:
acquiring a low-illumination image, and decomposing the low-illumination image into an illumination component and a reflection component by using a trained decomposer in a generative adversarial network;
identifying a guide pixel of the reflection component, and constructing an initial enhanced image of the low-illumination image by using a generator in the generative adversarial network according to the guide pixel;
identifying a global image effect of the initial enhanced image by using a global discriminator in the generative adversarial network, and identifying a local image effect of the initial enhanced image by using a local discriminator in the generative adversarial network;
judging, according to the illumination component, whether the global image effect and the local image effect each satisfy preset conditions, and, when both satisfy the preset conditions, taking the initial enhanced image as the final enhanced image of the low-illumination image.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The above description is merely illustrative of particular embodiments of the invention that enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A low-illumination image enhancement method, characterized in that the method comprises:
acquiring a low-illumination image, and decomposing the low-illumination image into an illumination component and a reflection component by using a trained decomposer in a generative adversarial network;
identifying a guide pixel of the reflection component, and constructing an initial enhanced image of the low-illumination image by using a generator in the generative adversarial network according to the guide pixel;
identifying a global image effect of the initial enhanced image by using a global discriminator in the generative adversarial network, and identifying a local image effect of the initial enhanced image by using a local discriminator in the generative adversarial network;
and judging, according to the illumination component, whether the global image effect and the local image effect each satisfy preset conditions, and, when both satisfy the preset conditions, taking the initial enhanced image as the final enhanced image of the low-illumination image.
2. The method of claim 1, wherein decomposing the low-illumination image into an illumination component and a reflection component by using a trained decomposer in the generative adversarial network comprises:
extracting image features of the low-illumination image by using the convolution layer in the decomposer;
locking image detail features with a pooling layer in the decomposer according to the image features;
decomposing the low-illumination image into the illumination component and the reflection component using an image decomposition function in the decomposer according to the image detail feature.
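A classical stand-in for this decomposition step can be sketched as follows; the box-filter "convolution", the max "pooling" and the Retinex rule I = L x R are assumed simplifications of the trained layers named in the claim, not the patent's networks.

```python
import numpy as np

def decompose_low_light(image, kernel_size=3):
    """Sketch of claim 2's pipeline with classical stand-ins: a box filter
    plays the role of the convolution layer, an elementwise max plays the
    role of the pooling layer, and a Retinex-style rule I = L * R splits
    the image into illumination L and reflectance R."""
    gray = image.mean(axis=-1)
    # "Convolution layer": box blur over a kernel_size x kernel_size window
    pad = kernel_size // 2
    padded = np.pad(gray, pad, mode='edge')
    smoothed = np.zeros_like(gray)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            smoothed += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    smoothed /= kernel_size ** 2
    # "Pooling layer": elementwise max of the smoothed map and the raw gray,
    # a crude stand-in for locking onto detail-preserving illumination
    illumination = np.maximum(smoothed, gray)
    # "Decomposition function": Retinex rule R = I / L, applied per channel
    reflectance = image / np.maximum(illumination[..., None], 1e-6)
    return illumination, np.clip(reflectance, 0.0, 1.0)
```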
3. The method of claim 1, wherein the identifying the leading pixel of the reflected component comprises:
identifying detail pixels in the reflected component;
and judging whether the detail pixel can serve as guide detail, and taking the detail pixel as the guide pixel when it can.
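One plausible reading of this step, with local gradient magnitude as the assumed criterion for a "detail pixel" (the claim does not fix the criterion):

```python
import numpy as np

def identify_guide_pixels(reflectance, grad_thresh=0.1):
    """Sketch of claim 3: treat pixels with strong local gradients in the
    reflectance map as detail pixels, then keep those above a threshold
    as guide pixels. The gradient test is an assumed criterion."""
    gy, gx = np.gradient(reflectance)
    detail_strength = np.hypot(gx, gy)
    # A detail pixel "can serve as guide detail" when its gradient
    # magnitude exceeds the threshold
    return detail_strength > grad_thresh
```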
4. The method of claim 1, wherein constructing an initial enhanced image of the low-illumination image by using a generator in the generative adversarial network according to the guide pixel comprises:
identifying, according to the guide pixel, a guide detail of the guide pixel by using a pixel identification layer in the generator;
configuring a generation rule for generating an enhanced image by using a rule layer in the generator according to the guiding details;
and generating an initial enhanced image by using an image generation function in the generator according to the generation rule.
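A minimal sketch of such a generation rule; the gamma brightening and the blending rule on guide pixels are illustrative assumptions standing in for the rule layer and the image generation function of the trained generator.

```python
import numpy as np

def generate_initial_enhanced(image, guide_mask, gamma=0.5):
    """Sketch of claim 4: the guide mask plays the role of the pixel
    identification layer's output, the rule layer is an assumed
    gamma-brightening rule softened on guide pixels to preserve detail,
    and the image generation function applies it."""
    brightened = np.power(np.clip(image, 0.0, 1.0), gamma)
    # Generation rule: keep guide (detail) pixels closer to the original
    out = np.where(guide_mask, 0.5 * image + 0.5 * brightened, brightened)
    return np.clip(out, 0.0, 1.0)
```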
5. The method of claim 1, wherein identifying the global image effect of the initial enhanced image by using the global discriminator in the generative adversarial network comprises:
acquiring a global feature map of the initial enhanced image by using a feature layer in the global discriminator;
according to the global feature map, identifying a global map dimension of the global feature map by using a dimension layer in the global discriminator;
and calculating the global image effect of the initial enhanced image by using a global feature function in the global discriminator according to the global map dimension.
6. The method of claim 5, wherein the global feature function comprises:
$$L_{31}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\big(\phi_{i,j}(I)_{x,y}-\phi_{i,j}(G)_{x,y}\big)^{2}$$
wherein L<sub>31</sub> represents the global image effect; i and j index the feature map corresponding to the initial enhanced image; W<sub>i,j</sub> and H<sub>i,j</sub> represent the dimensions of that feature map; φ<sub>i,j</sub> represents the feature map extracted from an image; G represents the initial enhanced image; and I represents the low-illumination image corresponding to the initial enhanced image.
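From the symbol definitions alone (W and H as feature-map dimensions, φ as the feature map, G the initial enhanced image, I the low-illumination image), the global feature function can plausibly be read as a squared feature-map difference normalised by the map size; the formula image itself is not reproduced in the text, so this reading is an assumption.

```python
import numpy as np

def global_feature_effect(feat_low, feat_enhanced):
    """Assumed reading of the global feature function: mean squared
    difference between the feature maps phi(I) and phi(G), normalised
    by the feature-map dimensions W * H."""
    w, h = feat_low.shape
    return float(np.sum((feat_low - feat_enhanced) ** 2) / (w * h))
```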
7. The method of claim 1, wherein identifying the local image effect of the initial enhanced image by using the local discriminator in the generative adversarial network comprises:
acquiring an enhanced local map of the initial enhanced image by using a segmentation layer in the local discriminator;
identifying, according to the enhanced local map, the corresponding initial local map by using an original layer in the local discriminator;
and calculating the local image effect of the initial enhanced image by using a local feature function in the local discriminator according to the enhanced local map and the initial local map.
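Claim 7's three steps can be sketched as follows; splitting into fixed-size patches and scoring each pair by mean absolute difference are assumed stand-ins for the segmentation layer, the original layer, and the local feature function.

```python
import numpy as np

def local_image_effect(enhanced, original, patch=4):
    """Sketch of claim 7: a 'segmentation layer' crops local patches from
    the enhanced image, an 'original layer' takes the matching patches
    from the low-illumination input, and a local feature function scores
    each pair (mean absolute difference is an assumed stand-in)."""
    h, w = enhanced.shape[:2]
    scores = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            enh_patch = enhanced[y:y + patch, x:x + patch]
            orig_patch = original[y:y + patch, x:x + patch]
            scores.append(np.abs(enh_patch - orig_patch).mean())
    return float(np.mean(scores))
```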
8. A low-illumination image enhancement apparatus, characterized in that the apparatus comprises:
a low-illumination image decomposition module, configured to acquire a low-illumination image and decompose the low-illumination image into an illumination component and a reflection component by using a trained decomposer in a generative adversarial network;
an enhanced image generation module, configured to identify a guide pixel of the reflection component and construct an initial enhanced image of the low-illumination image by using a generator in the generative adversarial network according to the guide pixel;
an enhanced image discrimination module, configured to identify a global image effect of the initial enhanced image by using a global discriminator in the generative adversarial network, and identify a local image effect of the initial enhanced image by using a local discriminator in the generative adversarial network;
and an enhanced image output module, configured to judge, according to the illumination component, whether the global image effect and the local image effect each satisfy preset conditions, and, when both satisfy the preset conditions, take the initial enhanced image as the final enhanced image of the low-illumination image.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the low-illuminance image enhancement method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements a low-illuminance image enhancement method according to any one of claims 1 to 7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211175581.3A CN115482169A (en) 2022-09-26 2022-09-26 Low-illumination image enhancement method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115482169A 2022-12-16

Family

ID=84395084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211175581.3A Pending CN115482169A (en) 2022-09-26 2022-09-26 Low-illumination image enhancement method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115482169A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109501A (en) * 2022-12-19 2023-05-12 深圳信息职业技术学院 Low-illumination image sequence enhancement method, device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
CN112507934A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111639704A (en) Target identification method, device and computer readable storage medium
CN112137591B (en) Target object position detection method, device, equipment and medium based on video stream
CN113920117B (en) Panel defect area detection method and device, electronic equipment and storage medium
CN108648189A (en) Image fuzzy detection method, apparatus, computing device and readable storage medium storing program for executing
CN112507923A (en) Certificate copying detection method and device, electronic equipment and medium
CN115294483A (en) Small target identification method and system for complex scene of power transmission line
CN115482169A (en) Low-illumination image enhancement method and device, electronic equipment and storage medium
CN113887439A (en) Automatic early warning method, device, equipment and storage medium based on image recognition
CN117455762A (en) Method and system for improving resolution of recorded picture based on panoramic automobile data recorder
CN113610934B (en) Image brightness adjustment method, device, equipment and storage medium
CN115908175A (en) Low-illumination image multi-level enhancement method and device, electronic equipment and storage medium
CN115760854A (en) Deep learning-based power equipment defect detection method and device and electronic equipment
CN114267064A (en) Face recognition method and device, electronic equipment and storage medium
CN113190703A (en) Intelligent retrieval method and device for video image, electronic equipment and storage medium
CN113792671A (en) Method and device for detecting face synthetic image, electronic equipment and medium
CN115937145B (en) Skin health visualization method, device and equipment based on big data analysis
CN113869385A (en) Poster comparison method, device and equipment based on target detection and storage medium
CN113391779A (en) Parameter adjusting method, device and equipment for paper-like screen
CN114612437B (en) AMOLED-based display image quality improvement method
CN112905817A (en) Image retrieval method and device based on sorting algorithm and related equipment
CN116563445B (en) Cartoon scene rendering method and device based on virtual reality
CN115100081B (en) LCD display screen gray scale image enhancement method, device, equipment and storage medium
CN114359645B (en) Image expansion method, device, equipment and storage medium based on characteristic area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination