CN117437160A - Method and device for enhancing low-light-level image based on space and frequency domain - Google Patents


Info

Publication number
CN117437160A
Authority
CN
China
Prior art keywords
image
information
layer component
channel layer
texture information
Prior art date
Legal status
Pending
Application number
CN202311443314.4A
Other languages
Chinese (zh)
Inventor
秦翰林
于跃
王广豪
张栩培
李静静
魏莉莉
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202311443314.4A
Publication of CN117437160A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for enhancing a low-light image based on the spatial and frequency domains, wherein the method comprises the following steps: decomposing an RGB three-channel image to be processed, through a preset DEC network, into a three-channel layer component comprising texture information and a one-channel layer component comprising brightness information; restoring and denoising the image texture information of the degraded layer component through a preset RES network to obtain a processed three-channel layer component comprising texture information, and obtaining a first feature map by enhancing the texture information in that component under the guidance of the one-channel layer component comprising brightness information; restoring the image brightness of the one-channel layer component comprising brightness information through a preset ILL network to obtain a second feature map; and merging the first feature map and the second feature map to obtain an RGB three-channel image with enhanced texture information and brightness information. The invention can enhance the texture details of the image.

Description

Method and device for enhancing low-light-level image based on space and frequency domain
Technical Field
The invention belongs to the field of digital image processing, and particularly relates to a low-light-level image enhancement method and device based on space and frequency domains.
Background
Low-light enhancement refers to improving the quality of an image captured under low-light conditions so that the image becomes clear, approximating the visual effect of an image captured under normal illumination. Low-light enhancement has important application value in fields such as night-time monitoring, autonomous driving, and medical diagnosis, and under low-light conditions it is a key technology for object/face detection and recognition.
Many photographs are currently taken under undesirable lighting conditions due to unavoidable environmental and technical constraints, including inadequate lighting conditions in the environment or incorrect placement of objects under extreme backlighting. The aesthetic quality of such low light photographs is compromised and information transfer is not satisfactory. The former can affect the viewer's experience, while the latter can lead to conveying false information such as inaccurate object/face detection and recognition. In addition, although the deep neural network exhibits impressive performance in terms of image enhancement and restoration, image enhancement in a low-light environment needs to take into consideration the problem that a low-light image has image degradation in the acquisition process, and corresponding texture enhancement, denoising and the like need to be performed while brightness is enhanced.
In recent years, many researchers at home and abroad have achieved notable results in low-light image enhancement and its applications. Existing low-light enhancement methods can be divided into two classes. The first fuses a low-light image with a normal-light image, generally using pixel-level fusion such as weighted averaging or layer-by-layer splicing, so as to improve the brightness and contrast of the image while preserving its detail information. The second directly enhances the low-light image, generally by constructing neural networks of different types, such as convolutional neural networks or adversarial networks, to process the low-light image and improve its quality. However, the images to be fused require strict registration; otherwise the fused result exhibits blurring, ghosting, and poor quality.
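As a rough illustration of the pixel-level fusion the first class of methods relies on, a weighted-average blend of two registered images might look like the following sketch (the weight `w` and the toy images are assumptions for illustration only):

```python
import numpy as np

def weighted_average_fusion(low_light, normal, w=0.5):
    """Pixel-level weighted-average fusion of two strictly registered images."""
    # Misregistration between the two inputs is exactly what produces the
    # blurring and ghosting artifacts described above.
    assert low_light.shape == normal.shape, "fusion requires strict registration"
    return w * low_light + (1.0 - w) * normal

low = np.zeros((4, 4, 3))   # toy under-exposed image
ref = np.ones((4, 4, 3))    # toy normally exposed image
fused = weighted_average_fusion(low, ref, w=0.25)
```

With `w=0.25` the result leans toward the normally exposed image, lifting brightness at the cost of diluting whatever detail only the low-light frame carries.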
Therefore, a low-light image enhancement method is needed that overcomes the problems of blurring, ghosting, poor quality, and the like.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a method and a device for enhancing a low-light-level image based on space and frequency domains. The technical problems to be solved by the invention are realized by the following technical scheme:
in a first aspect, the present invention provides a method for enhancing a low-light level image based on spatial and frequency domains, comprising:
decomposing an RGB three-channel image to be processed into a three-channel layer component comprising texture information and a one-channel layer component comprising brightness information through a preset DEC network; the RGB three-channel image to be processed comprises a low-light image and a normal light image;
restoring and denoising the image texture information of the degraded layer component in the three-channel layer component comprising texture information through a preset RES network to obtain a processed three-channel layer component comprising texture information; and obtaining a first feature map by enhancing the image texture information in the processed three-channel layer component under the guidance of the one-channel layer component comprising brightness information;
restoring the image brightness of a channel layer component comprising brightness information through a preset ILL network to obtain a second feature map;
and combining the first feature map and the second feature map to obtain an RGB three-channel image with enhanced image texture information and brightness information.
In a second aspect, the present invention also provides a low-light image enhancement device based on the spatial and frequency domains, including:
the first processing module is used for decomposing an RGB three-channel image to be processed into a three-channel layer component comprising texture information and a one-channel layer component comprising brightness information through a preset DEC network; the RGB three-channel image to be processed comprises a low-light image and a normal-light image;
the second processing module is used for restoring and denoising the image texture information of the degraded layer component in the three-channel layer component comprising texture information through a preset RES network to obtain a processed three-channel layer component comprising texture information, and for obtaining a first feature map by enhancing the image texture information in the processed three-channel layer component under the guidance of the one-channel layer component comprising brightness information;
the third processing module is used for restoring the image brightness of the one-channel layer component comprising brightness information through a preset ILL network to obtain a second feature map;
and the fusion module is used for merging the first feature map and the second feature map to obtain an RGB three-channel image with enhanced texture information and brightness information.
The invention has the beneficial effects that:
the invention provides a method and a device for enhancing a micro-light image based on space and frequency domains, which are used for adjusting the micro-light image by utilizing the space and frequency domains, so that the information loss of the image in the space and frequency domains is reduced.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a flow chart of a method for enhancing a low-light level image based on space and frequency domains according to an embodiment of the present invention;
FIG. 2 is another flow chart of a method for enhancing a low-light image based on the spatial and frequency domains according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of trichromatic theory and color constancy provided by an embodiment of the present invention;
fig. 4 is a schematic diagram of the three-color theory and color constancy application provided by an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Referring to fig. 1, fig. 1 is a flowchart of a low-light image enhancement method based on the spatial and frequency domains according to an embodiment of the present invention; the method includes:
s101, decomposing an RGB three-channel image to be processed into a three-channel image layer component comprising texture information and a one-channel image layer component comprising brightness information through a preset DEC network; the RGB three-channel image to be processed comprises a low-light image and a normal light image.
Specifically, in this embodiment, the preset DEC network is a dual-branch network composed of a U-Net and a convolutional neural network with independent and shared network parameters; the U-Net branch outputs the three-channel layer component comprising texture information, and the convolutional-neural-network branch outputs the one-channel layer component comprising brightness information. In the process of decomposing the RGB three-channel image to be processed into these two layer components through the preset DEC network, a loss is computed over the two layer components and minimized over multiple iterations; the loss is expressed as:
L_DEC = ‖R_twilight − R_normal‖_1 + ‖∇L_twilight − ∇L_normal‖_1 + (1 − SSIM(I_twilight, I_normal));
where ‖·‖_1 denotes the l1 norm, R_twilight denotes the three-channel layer component comprising texture information of the low-light image, R_normal denotes that of the normal-light image, ∇L_twilight denotes the first derivative of the one-channel layer component comprising brightness information of the low-light image, ∇L_normal denotes that of the normal-light image, I_twilight denotes the low-light image, I_normal denotes the normal-light image, and SSIM(·) denotes the structural similarity loss, an index measuring the similarity of two images;
the iterative decomposition is expressed as:
R, L = DEC(I);
where DEC(·) denotes the whole iterative process (i.e., the above steps are repeated), R denotes the image texture information, L denotes the image brightness information, and I denotes the input image.
Referring to fig. 2, fig. 2 is another flow chart of the low-light image enhancement method based on the spatial and frequency domains according to an embodiment of the present invention. The two sub-networks in the preset DEC network adopt independent and shared network parameters, and the independent dual-branch structure enables the two sub-networks to learn texture information and brightness information respectively. In addition, the skip-connection layers in the network alleviate the information loss that comes with increasing the depth of the network model.
With continued reference to fig. 2, the preset DEC network employs a 3-layer UNet and a convolutional neural network as the backbone. The 3-layer UNet consists of convolution layers, pooling layers, deconvolution layers, and ReLU activation functions; the convolution kernel size of the convolution layers is 3×3. MaxPooling32 and MaxPooling64 each denote a downsampling layer, Decon&Concat64 and Decon&Concat32 each denote an upsampling layer, Conv&ReLU32, Conv&ReLU64, and Conv&ReLU128 each denote a convolution layer with a ReLU activation function, and Conv3 and Sigmoid3 together denote the output layer. The downsampling layers apply global pooling to the feature map with a 2×2 kernel to obtain feature maps of different sizes; the upsampling layers upsample with a 2×2 kernel to the same size as the input features and then concatenate with them. In the convolutional-neural-network branch, Concat64 denotes a skip connection between convolution layers, Conv&ReLU32 denotes a convolution layer with a ReLU activation function, and Conv3 and Sigmoid3 together denote the output layer.
S102, restoring and denoising the image texture information of the degraded layer component in the three-channel layer component comprising texture information through a preset RES network to obtain a processed three-channel layer component comprising texture information; and obtaining a first feature map by enhancing the image texture information in the processed three-channel layer component under the guidance of the one-channel layer component comprising brightness information.
Specifically, in this embodiment, the first feature map is obtained through the following procedure, specifically:
s1021, carrying out Fourier transform on the three-channel layer component comprising texture information, and converting space domain information of the three-channel layer component comprising the texture information into frequency domain information to obtain a matrix, wherein the size of the matrix is the same as that of an image to be processed; wherein, the expression of the Fourier transform is:
wherein F (u, v) represents pixels of the frequency domain image, F (x, y) represents pixels of the spatial domain image, and M, N represents a matrix size of the image;
each point in the matrix describes frequency domain information, and a complex number is obtained according to the frequency domain information of each point, and the expression is as follows:
m+jn;
wherein m represents a real part, n represents an imaginary part, j represents an imaginary unit, and its modulus valueIts direction arctanIndicating the phase angle;
simulating a high-pass filter, and obtaining enhanced frequency domain information by utilizing complex enhanced image edge contours obtained by the frequency domain information;
s1022, performing inverse Fourier transform on the enhanced frequency domain information to obtain processed three-channel layer components comprising texture information; wherein, the expression of the inverse Fourier transform is:
wherein F (u, v) represents a pixel of the frequency domain image, u, v represents coordinates of the pixel of the frequency domain image, F (x, y) represents a pixel of the spatial domain image, x, y represents coordinates of the pixel of the spatial domain image, and M, N represents a matrix size of the image;
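Steps S1021 and S1022 can be sketched with NumPy's FFT: transform a single layer to the frequency domain, attenuate low frequencies with a Gaussian high-pass mask (one way to "simulate a high-pass filter"; the mask shape and gain are assumptions, not the patent's filter), and transform back.

```python
import numpy as np

def highpass_enhance(layer, sigma=5.0, gain=1.0):
    """Boost edge contours of a 2-D layer in the frequency domain."""
    M, N = layer.shape
    F = np.fft.fftshift(np.fft.fft2(layer))       # centred spectrum F(u, v)
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2        # squared distance to DC
    hp = 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian high-pass mask
    F_enh = F * (1.0 + gain * hp)                 # boost edges, keep DC intact
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_enh)))

layer = np.zeros((32, 32))
layer[8:24, 8:24] = 1.0                           # a square with sharp edges
enhanced = highpass_enhance(layer)
```

Because the mask is zero at DC, the mean brightness of the layer is preserved while the high-frequency edge energy grows, which is the frequency-domain behaviour the text describes.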
s1023, converting a channel layer component comprising brightness information into reference information by utilizing Steve power law, wherein the expression is as follows:
s=kI n
wherein s represents the heart quantity, namely the intensity of guiding the recovery of texture information, k represents the constant quantity, n represents the stimulation intensity perceived by the network, and I represents the brightness information;
guiding and enhancing the texture information of the image by utilizing the reference information, wherein the expression is as follows:
wherein,representing the final enhanced texture information, i.e. the first feature map, R h Representing the primarily enhanced texture information.
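The Stevens' power-law mapping above can be tried with illustrative constants; `k` and `n` here are arbitrary choices for the sketch, not the values the network perceives.

```python
def stevens(intensity, k=1.0, n=0.5):
    """Perceived sensation magnitude s = k * I**n (Stevens' power law)."""
    return k * intensity ** n

# With an exponent n < 1 the mapping is compressive: a dim region (I = 0.04)
# is lifted proportionally far more than a bright one (I = 0.64), which is
# the sense in which the luminance layer can grade texture recovery.
dim = stevens(0.04)
bright = stevens(0.64)
```

Here `dim` comes out near 0.2 (a fivefold relative lift) while `bright` comes out near 0.8 (only a 1.25x lift), so darker regions receive stronger guidance.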
In the process of using the reference information to guide and enhance the texture information of the image, a loss is computed over the three-channel layer component comprising texture information, and multiple iterations are performed; the loss is expressed as:
L_RES = ‖R̂ − R_normal‖_2 + ‖∇R̂ − ∇R_normal‖_2 + (1 − SSIM(R̂, R_normal));
where ‖·‖_2 denotes the l2 norm, R_normal denotes the three-channel layer component comprising texture information of the normal-light image, R̂ denotes the enhanced three-channel layer component comprising texture information of the low-light image, ∇R_normal and ∇R̂ denote their respective first derivatives, and SSIM(·) denotes the structural similarity loss, an index measuring the similarity of two images;
the iteration is expressed as:
R̂^n = RES(R, s);
where RES(·) denotes the whole iterative process, R̂^n denotes the enhanced texture information after n iterations, and s denotes the intensity level of texture recovery.
In this embodiment, after passing through the DEC network, a one-channel layer component including luminance information and a three-channel layer component including texture information are input into the RES network. Firstly, the three-channel layer component including texture information performs information exchange between a spatial domain and a frequency domain, and combines amplitude and phase information in the frequency domain to perform enhancement and denoising of the texture information, and then, the one-channel layer component including luminance information is used as reference information of the three-channel layer component including texture information to guide recovery of the texture information. The texture information of the low-light image is continuously close to the texture information of the normal illumination image, and three-channel image layer components containing the texture information are guided to be recovered and updated continuously. Thus, the RES network can be regarded as a non-linear pixel map, so that three channel layer components containing texture information effectively learn the texture information distribution weight of the normal illumination image, reduce image degradation, and generate layer components with higher signal-to-noise ratio.
With continued reference to fig. 2, the RES network of this embodiment has four downsampling layers and four upsampling layers, each followed by a ReLU activation function. The number of internal channels doubles after each downsampling block and halves after each upsampling block, varying between 32 and 256. The first Concat4 denotes the RES network input; MaxPooling32, MaxPooling64, MaxPooling128, and MaxPooling256 each denote a downsampling layer; Decon&Concat512, Decon&Concat256, Decon&Concat128, and Decon&Concat64 each denote an upsampling layer; Conv&ReLU32, Conv&ReLU64, Conv&ReLU128, Conv&ReLU256, and Conv&ReLU512 each denote a convolution layer with a ReLU activation function; and Conv3 and Sigmoid3 together denote the final output layer. All convolution layers use a kernel size of 3×3, a stride of 1, and a padding of 1. The tail uses a Sigmoid activation function.
S103, restoring the image brightness of the one-channel layer component comprising the brightness information through a preset ILL network to obtain a second characteristic diagram.
Specifically, in this embodiment, the second feature map is obtained through the following procedure, specifically:
s1031, comparing a channel layer component of the low-light image including brightness information with a channel layer component of the normal illumination image including brightness information to obtain a difference image, wherein the expression is as follows:
wherein,representing the difference between a channel layer component of a low-light image comprising luminance information and a channel layer component of a normal-light image comprising luminance information, L twilight One-channel layer component, L, representing a low-light image including luminance information normal A channel layer component representing that the normal illumination image includes luminance information;
s1032, adjusting a layer component of a channel image of the low-light image including brightness information by using the difference map to obtain a second feature map, specifically:
comparing the difference image with brightness information of the low-light image and the normal image, wherein the expression is as follows:
wherein L is 1 A first supervision representing a low-light image,representing the relative difference between the brightness of the low-light image and the normal-light image, L twilight One-channel layer component, L, representing a low-light image containing luminance information normal One-channel layer component representing that the normal illumination image contains brightness information, epsilon represents a minimum value, and epsilon represents a minimum value 2 Represents the l2 norm;
controlling illumination smoothing and reducing halation artifacts by using a difference map and a first derivative of a layer component of a channel of the low-light image including luminance information, expressed as:
wherein L is 2 A second supervision representing a low-light image,representing the first derivative of the difference map, +.>First derivative of one-channel layer component representing low-light image including luminance information 1 Represents the l1 norm;
first supervision L using low-light images 1 And a second supervision L of the low-light image 2 Performing loss calculation as a loss function, and performing multiple iterations to obtain a second feature map; wherein, the expression of the multiple iterations is:
where ILL () represents the entire iterative process,representing the enhanced texture information after n iterations, i.e. the second feature map, l represents the magnitude of the reflectivity gray scale intensity.
In this embodiment, after passing through the DEC network, the one-channel layer component comprising brightness information is input into the ILL network. The brightness information of the low-light image and that of the normal-light image are compared to obtain the difference map; the difference map gives the degree of brightness degradation of the low-light image and is then used to perform brightness enhancement.
With continued reference to FIG. 2, the ILL network of the present invention employs a common CNN having 3 convolution layers, each followed by a ReLU activation function, concat64 representing the input to the ILL network, conv & ReLU32 representing the convolution layer with the ReLU activation function, conv1 and Sigmoid1 together representing the final output layer. All convolutional layers use a kernel size of 3x3, step size of 1 and padding of 1. The tail uses Sigmoid activation functions.
S104, combining the first feature map and the second feature map to obtain an RGB three-channel image with enhanced image texture information and brightness information.
Specifically, in this embodiment, the network models with updated parameters are combined, and the texture information and the brightness information are merged according to the trichromatic theory and color constancy, so that the four-channel representation becomes a normal RGB three-channel image; specifically:
multiplying the first feature map and the second feature map pixel by pixel to obtain the RGB three-channel image with enhanced texture information and brightness information; the pixel-by-pixel multiplication is expressed as:
I(x,y) = L(x,y) * R(x,y);
where I(x,y) denotes the RGB three-channel image with enhanced texture and brightness information, R(x,y) denotes the first feature map (texture information), and L(x,y) denotes the second feature map (brightness information).
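Step S104 reduces to a broadcast element-wise product; a minimal sketch with toy maps (the shapes follow the decomposition used earlier, the values are illustrative):

```python
import numpy as np

def recombine(texture, luminance):
    """texture: HxWx3 first feature map, luminance: HxWx1 second feature map."""
    # broadcasting multiplies the single luminance channel into all three
    # colour channels, yielding the enhanced RGB image
    return luminance * texture

R_map = np.ones((2, 2, 3)) * 0.9   # toy enhanced texture map
L_map = np.full((2, 2, 1), 0.5)    # toy enhanced luminance map
I_out = recombine(R_map, L_map)
```

NumPy's broadcasting does the per-channel replication implicitly, so no explicit channel loop is needed.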
In this embodiment, the trichromatic theory and color constancy are used to decompose the low-light image into a three-channel layer component containing texture information and a one-channel layer component containing brightness information, and the results enhanced by the dual-branch network are used for image synthesis, so that the brightness of the low-light image is improved while its degradation is reduced.
Referring to fig. 3, fig. 3 is a schematic diagram of the trichromatic theory and color constancy according to an embodiment of the present invention; this embodiment uses the trichromatic theory and color constancy to perform image decomposition. An image may be decomposed into a reflectance (texture) component and a luminance component: the reflectance component mainly carries texture information, and the luminance component mainly carries brightness information.
Referring to fig. 4, fig. 4 is a schematic diagram of an application of the trichromatic theory and color constancy according to an embodiment of the present invention, in which a mathematical model is built using the trichromatic theory and color constancy, and complex multiplication is converted into addition, thereby reducing the computational complexity.
In summary, the present invention provides a low-light image enhancement method based on the spatial and frequency domains: a dual-branch, phase-enhancement-based low-light enhancement network decomposes the low-light image into a luminance component and a reflectance component, adjusts the brightness of the image in the spatial domain, and meanwhile enhances and denoises the image textures in the spatial and frequency domains, thereby reducing the degradation of the image. The dual-branch network provides a divide-and-conquer enhancement method that extracts information from the spatial and frequency domains for low-light image enhancement.
Based on the same inventive concept, the present invention further provides a low-light image enhancement device based on the spatial and frequency domains, which applies the low-light image enhancement method provided by the above embodiment of the present invention; for details, reference is made to the above description of the method embodiment, which is not repeated here. The device comprises:
the first processing module is used for decomposing an RGB three-channel image to be processed into a three-channel image layer component comprising texture information and a one-channel image layer component comprising brightness information through a preset DEC network; the RGB three-channel image to be processed comprises a low-light image and a normal light image;
the second processing module is used for restoring and denoising the image texture information of the degraded layer component in the three-channel layer component comprising texture information through a preset RES network to obtain a processed three-channel layer component comprising texture information, and for obtaining a first feature map by enhancing the image texture information in the processed three-channel layer component under the guidance of the one-channel layer component comprising brightness information;
the third processing module is used for recovering the image brightness of the layer component of the one channel image comprising the brightness information through a preset ILL network to obtain a second characteristic diagram;
and the fusion module is used for merging the first feature map and the second feature map to obtain an RGB three-channel image with enhanced image texture information and brightness information.
In summary, the present invention provides a micro-light image enhancement device based on spatial and frequency domains, and provides a dual-branch micro-light image enhancement network based on phase enhancement, which decomposes a micro-light image into a brightness component and a reflection component, adjusts the brightness of the image through the spatial domain, and enhances and denoises the texture of the image in the spatial and frequency domains, thereby reducing the degradation of the image. The dual-branch network provides a divide-and-conquer enhancement method, and extracts information from the spatial and frequency domains for low-light image enhancement.
It should be noted that in this document relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in an article or apparatus that comprises the element. Terms such as "connected" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The orientation or positional relationship indicated by "upper", "lower", "left", "right", etc. is based on the orientation or positional relationship shown in the drawings, is merely for convenience of description, and does not indicate or imply that the apparatus or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; it should therefore not be construed as limiting the invention.
In the description of the present specification, a description referring to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Further, those skilled in the art may combine the different embodiments or examples described in this specification.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (9)

1. A method for enhancing a low-light image based on spatial and frequency domains, comprising:
decomposing an RGB three-channel image to be processed into a three-channel layer component comprising texture information and a one-channel layer component comprising brightness information through a preset DEC network; the RGB three-channel image to be processed comprises a low-light image and a normal light image;
restoring and denoising image texture information of the degraded layer component in the three-channel layer component comprising the texture information through a preset RES network to obtain a processed three-channel layer component comprising the texture information; strengthening image texture information in the processed three-channel layer component comprising texture information according to the one-channel layer component comprising brightness information to obtain a first feature map;
restoring the image brightness of the one-channel layer component comprising the brightness information through a preset ILL network to obtain a second feature map;
and combining the first feature map and the second feature map to obtain an RGB three-channel image with enhanced image texture information and brightness information.
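The four steps of claim 1 can be sketched as a small pipeline. This is a hypothetical illustration only: `dec`, `res`, `ill` and `strengthen` below are stand-ins for the patent's pre-trained DEC, RES and ILL networks and for the luminance-guided strengthening step; the claims do not fix their interfaces.

```python
import numpy as np

def enhance_low_light(image, dec, res, ill, strengthen):
    """Sketch of claim 1: decompose, restore texture, restore brightness, fuse."""
    R, L = dec(image)             # 3-channel texture + 1-channel luminance components
    R_hat = res(R)                # recover/denoise the degraded texture component
    first = strengthen(R_hat, L)  # first feature map, guided by the luminance component
    second = ill(L)               # second feature map, restored image brightness
    return first * second         # pixel-wise fusion back into an RGB image
```

With identity-like stubs for the four stages, the pipeline reduces to a Retinex-style reconstruction I = R * L.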
2. The method for enhancing a low-light image based on the spatial and frequency domains according to claim 1, wherein the recovering and denoising of image texture information performed on the degraded layer component in the three-channel layer component including texture information through a preset RES network, to obtain a processed three-channel layer component including texture information, comprises:
performing Fourier transform on the three-channel layer component comprising the texture information, converting the spatial-domain information of the three-channel layer component comprising the texture information into frequency-domain information to obtain a matrix; wherein each point in the matrix describes frequency-domain information, and a complex number is obtained from the frequency-domain information of each point, with the expression:
m+jn;
wherein m represents a real part, n represents an imaginary part, and j represents an imaginary unit;
simulating a high-pass filter, and enhancing the image edge contours by using the complex values obtained from the frequency-domain information, so as to obtain enhanced frequency-domain information;
and performing inverse Fourier transform on the enhanced frequency domain information to obtain the processed three-channel layer component comprising texture information.
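The frequency-domain steps of claim 2 can be illustrated with a minimal NumPy sketch. The hard-threshold mask and the `cutoff`/`gain` values are assumptions chosen for illustration; the claim only states that a high-pass filter is simulated to enhance edge contours.

```python
import numpy as np

def highpass_enhance(channel, cutoff=0.1, gain=1.5):
    """Enhance edge contours of one channel by boosting high frequencies.

    The spatial-domain channel is Fourier-transformed, each complex
    entry m + jn is scaled by a simulated high-pass mask, and the
    result is inverse-transformed back to the spatial domain.
    """
    F = np.fft.fftshift(np.fft.fft2(channel))          # frequency-domain matrix
    h, w = channel.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized distance of each frequency bin from the DC component
    dist = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    mask = np.where(dist > cutoff, gain, 1.0)          # boost high frequencies only
    out = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    return out
```

Because the DC bin is left untouched, the mean brightness of the channel is preserved while edges are amplified.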
3. The method for enhancing a low-light image based on the spatial and frequency domains according to claim 1, wherein said strengthening image texture information in the processed three-channel layer component including texture information according to the one-channel layer component including luminance information to obtain a first feature map comprises:
converting the one-channel layer component including luminance information into reference information using Stevens' power law, the expression of which is:
s = kI^n;
wherein s represents the perceived magnitude, i.e. the strength with which the recovery of texture information is guided, k represents a constant, n represents the exponent characterizing the stimulus intensity perceived by the network, and I represents the luminance information;
guiding and enhancing the texture information of the image by utilizing the reference information, wherein the expression is as follows:
wherein R̂ represents the final enhanced texture information, i.e. the first feature map, and R_h represents the preliminarily enhanced texture information.
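Stevens' power law s = kI^n of claim 3 maps physical luminance I to a perceived magnitude s that guides how strongly texture is recovered. A minimal sketch, with k and n as assumed free parameters (the patent leaves their effective values to the trained network):

```python
import numpy as np

def stevens_guidance(luminance, k=1.0, n=0.5):
    """Perceived-magnitude map s = k * I**n (Stevens' power law).
    k and n are illustrative constants, not values from the patent."""
    return k * np.power(np.clip(luminance, 0.0, 1.0), n)
```

With n < 1 the mapping expands dark values, so darker regions receive proportionally stronger texture-recovery guidance.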
4. The method for enhancing a low-light image based on the spatial and frequency domains according to claim 3, wherein, in the process of guiding and enhancing the texture information of the image using the reference information, a loss calculation is performed on the three-channel layer component containing the texture information and iterated a plurality of times; wherein the expression of the loss calculation is:
wherein ‖·‖₂ represents the l2 norm, R_normal represents the three-channel layer component containing the texture information of the normal-light image, R̂_twilight represents the three-channel layer component containing the texture information of the enhanced low-light image, ∇R_normal represents the first derivative of the three-channel layer component containing the texture information of the normal-light image, ∇R̂_twilight represents the first derivative of the three-channel layer component containing the texture information of the enhanced low-light image, and SSIM() represents the structural similarity loss;
the expression for performing multiple iterations is:
wherein RES() represents the entire iterative process, R̂^n represents the enhanced texture information after n iterations, and s represents the intensity level of texture recovery.
5. The method for enhancing a low-light image based on the spatial and frequency domains according to claim 1, wherein the recovering, through a preset ILL network, of the image brightness of the one-channel layer component including brightness information to obtain a second feature map comprises:
comparing the one-channel layer component of the low-light image including brightness information with the one-channel layer component of the normal-light image including brightness information to obtain a difference map, wherein the expression is as follows:
wherein the difference map represents the difference between the one-channel layer component of the low-light image comprising luminance information and the one-channel layer component of the normal-light image comprising luminance information, L_twilight represents the one-channel layer component of the low-light image including luminance information, and L_normal represents the one-channel layer component of the normal-light image including luminance information;
and adjusting a channel layer component of the low-light image including brightness information by using the difference map to obtain a second characteristic map.
6. The method of claim 5, wherein said adjusting the one-channel layer component of the low-light image including luminance information using the difference map comprises:
comparing the difference map with the brightness information of the low-light image and the normal-light image, wherein the expression is as follows:
wherein L_1 represents the first supervision term for the low-light image, the difference map represents the relative difference between the brightness of the low-light image and that of the normal-light image, L_twilight represents the one-channel layer component of the low-light image containing luminance information, L_normal represents the one-channel layer component of the normal-light image containing luminance information, ε represents a small constant, and ‖·‖₂ represents the l2 norm;
controlling illumination smoothness by using the difference map and the first derivative of the one-channel layer component of the low-light image including luminance information, wherein the expression is:
wherein L_2 represents the second supervision term for the low-light image, ∇ applied to the difference map represents its first derivative, ∇L_twilight represents the first derivative of the one-channel layer component of the low-light image including luminance information, and ‖·‖₁ represents the l1 norm;
performing loss calculation using the first supervision L_1 of the low-light image and the second supervision L_2 of the low-light image as the loss function, and performing multiple iterations to obtain the second feature map; wherein the expression of the multiple iterations is:
wherein ILL() represents the entire iterative process, L̂^n represents the enhanced brightness information after n iterations, i.e. the second feature map, and l represents the magnitude of the reflectivity gray-scale intensity.
7. The method for enhancing a low-light image based on the spatial and frequency domains according to claim 1, wherein combining the first feature map and the second feature map to obtain the RGB three-channel image with enhanced image texture information and brightness information comprises:
multiplying the first feature map and the second feature map pixel by pixel to obtain an RGB three-channel image with enhanced image texture information and brightness information; wherein the expression of the pixel-by-pixel multiplication is as follows:
I(x,y)=L(x,y)*R(x,y);
wherein I(x, y) represents the RGB three-channel image with enhanced image texture information and luminance information, L(x, y) represents the first feature map, and R(x, y) represents the second feature map.
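The fusion of claim 7 is a plain element-wise product. A minimal sketch: when one feature map is single-channel (H, W) and the other three-channel (H, W, 3), NumPy broadcasting applies the multiply per channel; the final clip to [0, 1] is an added assumption, not part of the claim.

```python
import numpy as np

def fuse(first_feature_map, second_feature_map):
    """Pixel-by-pixel product I(x, y) = L(x, y) * R(x, y) from claim 7."""
    a = np.asarray(first_feature_map, dtype=np.float64)
    b = np.asarray(second_feature_map, dtype=np.float64)
    if a.ndim == 2:
        a = a[..., None]   # broadcast a one-channel map over the RGB channels
    if b.ndim == 2:
        b = b[..., None]
    return np.clip(a * b, 0.0, 1.0)
```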
8. The method according to claim 1, wherein in the process of decomposing the RGB three-channel image to be processed into a three-channel layer component including texture information and a one-channel layer component including luminance information through a preset DEC network, a loss calculation is performed on the three-channel layer component including texture information and the one-channel layer component including luminance information, and a plurality of iterations are performed; wherein, the expression of the loss calculation is:
wherein ‖·‖₁ represents the l1 norm, R_twilight represents the three-channel layer component of the low-light image including texture information, R_normal represents the three-channel layer component of the normal-light image including texture information, ∇L_twilight represents the first derivative of the one-channel layer component of the low-light image comprising luminance information, ∇L_normal represents the first derivative of the one-channel layer component of the normal-light image including luminance information, I_twilight represents the low-light image, I_normal represents the normal-light image, and SSIM() represents the structural similarity loss function;
the expression for performing multiple iterations is:
R,L=DEC(I);
where DEC () represents the entire iterative process, R represents image texture information, L represents image luminance information, and I represents an image.
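The decomposition R, L = DEC(I) of claim 8 can be illustrated with a classical Retinex-style initialization that uses the per-pixel channel maximum as the luminance; this is an illustrative stand-in, not the patent's learned DEC network.

```python
import numpy as np

def dec_sketch(rgb):
    """Split an RGB image I into a 1-channel luminance L (channel maximum)
    and a 3-channel reflectance/texture R, so that I ≈ R * L."""
    I = np.asarray(rgb, dtype=np.float64)
    L = I.max(axis=-1)               # (H, W) luminance component
    R = I / (L[..., None] + 1e-6)    # (H, W, 3) texture component
    return R, L
```

Recombining the two components recovers the input image up to the small stabilizing epsilon.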
9. A low-light level image enhancement device based on spatial and frequency domains, comprising:
the first processing module is used for decomposing an RGB three-channel image to be processed into a three-channel image layer component comprising texture information and a one-channel image layer component comprising brightness information through a preset DEC network; the RGB three-channel image to be processed comprises a low-light image and a normal light image;
the second processing module is used for recovering and denoising the image texture information of the degraded layer component in the three-channel layer component comprising the texture information through a preset RES network to obtain the processed three-channel layer component comprising the texture information; strengthening image texture information in the processed three-channel layer component comprising texture information according to the one-channel layer component comprising brightness information to obtain a first feature map;
the third processing module is used for recovering the image brightness of the one-channel layer component comprising the brightness information through a preset ILL network to obtain a second feature map;
and the fusion module is used for merging the first feature map and the second feature map to obtain an RGB three-channel image with enhanced image texture information and brightness information.
CN202311443314.4A 2023-11-01 2023-11-01 Method and device for enhancing low-light-level image based on space and frequency domain Pending CN117437160A (en)


Publications (1)

Publication Number Publication Date
CN117437160A true CN117437160A (en) 2024-01-23

Family

ID=89551215


Similar Documents

Publication Publication Date Title
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN111080541B (en) Color image denoising method based on bit layering and attention fusion mechanism
CN112001843B (en) Infrared image super-resolution reconstruction method based on deep learning
CN111738948B (en) Underwater image enhancement method based on double U-nets
CN113052814B (en) Dim light image enhancement method based on Retinex and attention mechanism
CN113284061B (en) Underwater image enhancement method based on gradient network
CN117011194B (en) Low-light image enhancement method based on multi-scale dual-channel attention network
CN114219722A (en) Low-illumination image enhancement method by utilizing time-frequency domain hierarchical processing
Ma et al. Underwater image restoration through a combination of improved dark channel prior and gray world algorithms
CN114565539B (en) Image defogging method based on online knowledge distillation
CN115511708A (en) Depth map super-resolution method and system based on uncertainty perception feature transmission
CN115035011A (en) Low-illumination image enhancement method for self-adaptive RetinexNet under fusion strategy
Liu et al. Toward visual quality enhancement of dehazing effect with improved Cycle-GAN
CN112200719B (en) Image processing method, electronic device, and readable storage medium
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
Oh et al. Residual dilated u-net with spatially adaptive normalization for the restoration of under display camera images
CN117437160A (en) Method and device for enhancing low-light-level image based on space and frequency domain
CN116266336A (en) Video super-resolution reconstruction method, device, computing equipment and storage medium
CN114926359A (en) Underwater image enhancement method combining bicolor space recovery and multistage decoding structure
CN114862707A (en) Multi-scale feature recovery image enhancement method and device and storage medium
CN115705616A (en) True image style migration method based on structure consistency statistical mapping framework
CN114830168A (en) Image reconstruction method, electronic device, and computer-readable storage medium
CN117173052A (en) Low-illumination image enhancement method based on unsupervised generation countermeasure network
CN116433508B (en) Gray image coloring correction method based on Swin-Unet
Song et al. NRNet: Retinex Decomposition with Realistic Noise

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination