CN114998120B - Dim light image optimization training method, intelligent terminal and computer readable storage medium - Google Patents


Info

Publication number
CN114998120B
CN114998120B (application CN202210536653.6A)
Authority
CN
China
Prior art keywords
image
loss function
initial
illumination
trained
Prior art date
Legal status
Active
Application number
CN202210536653.6A
Other languages
Chinese (zh)
Other versions
CN114998120A (en)
Inventor
王晓晖
Current Assignee
Shenzhen Xiaopai Technology Co ltd
Original Assignee
Shenzhen Xiaopai Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xiaopai Technology Co ltd
Priority to CN202210536653.6A
Publication of CN114998120A
Application granted
Publication of CN114998120B
Status: Active

Classifications

    • G06T5/00 Image enhancement or restoration
        • G06T5/70 Denoising; Smoothing
        • G06T5/90 Dynamic range modification of images or parts thereof
    • G06N3/02 Neural networks
        • G06N3/08 Learning methods
            • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
        • G06T2207/10024 Color image
        • G06T2207/20081 Training; Learning
        • G06T2207/20084 Artificial neural networks [ANN]
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a dim-light image optimization training method, an intelligent terminal, and a computer-readable storage medium. The method comprises the following steps: inputting an image data set to be trained into an initial image decomposition network module to obtain an initial decomposition image set; determining a first loss function and a second loss function of the image data set according to the initial decomposition image set; inputting the dim-light image set within the initial decomposition image set into an initial restoration module to obtain an initial restoration image set; determining a third loss function and a fourth loss function according to the initial restoration image set and the initial decomposition image set; and obtaining a trained image decomposition network module from the first and second loss functions, and a trained restoration module from the third and fourth loss functions. The invention optimizes dim-light images so that they more closely resemble images captured under normal light.

Description

Dim light image optimization training method, intelligent terminal and computer readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a dim light image optimization training method, an intelligent terminal, and a computer readable storage medium.
Background
Besides the capabilities of the camera itself, the brightness of the ambient light also strongly affects image quality. When the ambient light is too dark, images captured by surveillance cameras cannot be clearly distinguished, which creates great difficulty for security personnel. In addition, users' expectations for mobile imaging devices such as mobile phones and cameras keep rising, yet images shot in dim-light environments are often unsatisfactory. Researching techniques for optimizing and training on dim-light images is therefore of great significance. However, existing dim-light image optimization training techniques suffer from high cost and poor image processing quality.
Disclosure of Invention
The invention provides a dim-light image optimization training method, an intelligent terminal, and a computer-readable storage medium, aiming to solve the high cost and poor image processing quality of existing dim-light image optimization training techniques.
In order to achieve the above object, the present invention provides a dim light image optimization training method, which includes the following steps:
inputting the image data set to be trained into an initial image decomposition network module to obtain an initial decomposition image set;
determining a first loss function and a second loss function of the image data set to be trained according to the initial decomposition image set;
inputting the dim-light image set within the initial decomposition image set into an initial restoration module to obtain an initial restoration image set;
determining a third loss function and a fourth loss function of the image data set to be trained according to the initial restoration image set and the initial decomposition image set;
and obtaining a trained image decomposition network module according to the first loss function and the second loss function, and obtaining a trained recovery module according to the third loss function and the fourth loss function.
Optionally, the image data set to be trained includes a label image and a dim-light image; the initial decomposition image set comprises a label reflected-light image and a label illumination image corresponding to the label image, and a dim-light reflected-light image and a dim-light illumination image corresponding to the dim-light image; the step of determining a first loss function and a second loss function of the image data set to be trained comprises:
determining a first loss function between the label reflected-light image and the label illumination image, and determining a second loss function between the dim-light reflected-light image and the dim-light illumination image.
Optionally, the dim-light image set includes the dim-light reflected-light image and the dim-light illumination image; the initial restoration module comprises an initial reflection restoration module and an initial illumination adjustment module; the initial restoration image set includes a first restored image and a second restored image; the step of inputting the dim-light image set from the initial decomposition image set into the initial restoration module to obtain the initial restoration image set includes:
inputting the dim-light reflected-light image into the initial reflection restoration module to obtain the first restored image, and inputting the dim-light illumination image into the initial illumination adjustment module to obtain the second restored image.
optionally, the step of determining a third loss function and a fourth loss function of the image dataset to be trained comprises:
determining a third loss function between the first restored image and the label reflected-light image, and determining a fourth loss function between the second restored image and the label illumination image.
Optionally, the trained recovery module includes a trained reflection recovery module and a trained illumination adjustment module; the step of obtaining the trained recovery module according to the third loss function and the fourth loss function comprises the following steps:
obtaining the trained reflection restoration module according to the third loss function, and obtaining the trained illumination adjustment module according to the fourth loss function.
optionally, after the step of obtaining the trained recovery module according to the third loss function and the fourth loss function, the method includes:
inputting the first restored image and the second restored image into a preset initial brightness adjustment curve to obtain a reconstructed image corresponding to the dim-light image;
determining a fifth loss function between the reconstructed image and the label image;
and optimizing the initial brightness adjustment curve according to the fifth loss function to obtain a trained brightness adjustment curve.
Optionally, the step of determining a fifth loss function between the reconstructed image and the label image comprises:
determining a regularization function, a structural similarity loss function, and a color loss function between the reconstructed image and the label image;
and taking a weighted sum of the regularization function, the structural similarity loss function, and the color loss function as the fifth loss function.
Optionally, the step of determining a first loss function between the label reflected-light image and the label illumination image and determining a second loss function between the dim-light reflected-light image and the dim-light illumination image comprises:
determining a first regularization function between the label reflected-light image and the label illumination image and taking it as the first loss function, and determining a second regularization function between the dim-light reflected-light image and the dim-light illumination image and taking it as the second loss function.
optionally, the step of determining a third loss function between the first restored image and the label reflected light image includes:
determining a regularization function between the first restored image and the tag reflected light image, and determining a structural similarity loss function between the first restored image and the tag reflected light image;
and taking the weighted sum between the regularization function and the structural similarity loss function as a third loss function.
Optionally, the step of determining a fourth loss function between the second restored image and the label illumination image comprises:
determining a regularization function between the label illumination image and the second restored image, and determining a gradient loss function between the label illumination image and the second restored image based on the regularization function;
and taking the weighted sum of the regularization function and the gradient loss function as the fourth loss function.
Optionally, after the step of optimizing the initial brightness adjustment curve according to the fifth loss function to obtain a trained brightness adjustment curve, the method includes:
inputting the dim-light image to be optimized into the trained image decomposition network module to obtain a reflected-light image to be optimized and an illumination image to be optimized;
inputting the reflected-light image to be optimized into the trained reflection restoration module to obtain a reflection-restored image to be optimized, and inputting the illumination image to be optimized into the trained illumination adjustment module to obtain an illumination-restored image to be optimized;
and inputting the reflection-restored image and the illumination-restored image into the trained brightness adjustment curve to obtain a target effect image.
In addition, to achieve the above object, the present invention also provides an intelligent terminal comprising a memory, a processor, and a dim-light image optimization training program stored in the memory and executable on the processor, wherein the dim-light image optimization training program, when executed by the processor, implements the steps of the dim-light image optimization training method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium storing a dim-light image optimization training program which, when executed by a processor, implements the steps of the dim-light image optimization training method described above.
The dim-light image optimization training method is based on a simple convolutional neural network. A label image and a dim-light image are decomposed by an initial image decomposition network module into their respective reflected-light and illumination images. A first loss function between the label reflected-light image and the label illumination image and a second loss function between the dim-light reflected-light image and the dim-light illumination image are determined, and the initial image decomposition network module is iteratively optimized against them to obtain a trained decomposition network module. A third loss function between the first restored image and the label reflected-light image and a fourth loss function between the label illumination image and the second restored image are then determined; the initial reflection restoration module is iteratively optimized with the third loss function to obtain a trained reflection restoration module, and the initial illumination adjustment module is trained with the fourth loss function to obtain a trained illumination adjustment module. Afterwards, a dim-light image to be optimized need only be passed through the trained image decomposition network module, reflection restoration module, and illumination adjustment module to obtain an image closer to normal light, with more detail, less noise, and more realistic color.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment of an intelligent terminal according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of the dim light image optimization training method according to the present invention;
FIG. 3 is a schematic diagram of a model structure of an initial image decomposition network module according to a first embodiment of the dim light image optimization training method of the present invention;
FIG. 4 is a schematic diagram of an overall frame flow involved in a first embodiment of the dim light image optimization training method according to the present invention;
fig. 5 is a schematic diagram of an overall frame flow involved in a second embodiment of the dim light image optimization training method according to the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention is generally described as follows: an image is decomposed into a reflected-light image and an illumination image; that is, the original image space is decoupled into two smaller spaces so that the model can learn regularization more effectively. An image decoupling network model (the initial image decomposition network module) is designed to decompose an acquired image pair (a dim-light image and a label image) into corresponding reflected-light and illumination images. A reflection restoration module and an illumination adjustment module are then designed to train models on the dim-light reflected-light and illumination images respectively, producing restored reflected-light and illumination images; the illumination adjustment module allows free adjustment of the illumination image. The image is then restored by taking the element-wise (dot) product of the two outputs. Finally, a brightness adjustment curve is designed for fine-tuning at the pixel level; this improves the generalization ability of the model and better optimizes the network parameters, making the output image closer to the target (normal-light) image.
As shown in fig. 1, fig. 1 is a schematic diagram of a terminal structure of a hardware operating environment of an intelligent terminal according to an embodiment of the present invention.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 enables communication between these components. The user interface 1003 may include a display and an input unit such as a control panel, and may optionally include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless (e.g., WLAN) interfaces. The memory 1005 may be high-speed RAM or non-volatile memory, such as disk storage, and may optionally be a storage device separate from the processor 1001. As a computer storage medium, the memory 1005 may contain a dim-light image optimization training program.
Optionally, the terminal may also include a microphone, speaker, RF (Radio Frequency) circuitry, sensors, audio circuitry, wireless modules, and the like. Sensors such as infrared and distance sensors are not described further here.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the dim light image optimization training method according to the present invention, and in this embodiment, the method includes:
step S10, inputting an image data set to be trained into an initial image decomposition network module to obtain an initial decomposition image set;
in this embodiment, the image dataset to be trained is an already classified and labeled image dataset, the image dataset to be trained includes a tag image and a darkness light image, and the initial decomposition image dataset includes a tag reflected light image and a tag illumination image corresponding to the tag image, and a darkness light reflected light image and a darkness light illumination image corresponding to the darkness light image.
The image data set to be trained may come from a supervised initial data set, such as the public data sets GLAD, LOL, or LSSR. The initial data set is screened and named consistently so that each dim-light image corresponds one-to-one with a label image, and is then randomly split into training, test, and validation sets in a 6:3:1 ratio to obtain the image data set used for model training. A label image is an image captured under normal light.
The initial image decomposition network module is an initial model built on a convolutional neural network: a dim-light image and its label image are decoupled into reflected-light and illumination images by a deep-learning CNN. The model structure is shown in fig. 3. A normal-light image and the corresponding dim-light image are input into a simple CNN, and through an activation function the normal-light (label) image is decoupled into reflected light R_normal (label reflected-light image) and illumination L_normal (label illumination image), while the dim-light image is decoupled into reflected light R_low (dim-light reflected-light image) and illumination L_low (dim-light illumination image). The initial image decomposition network module is thus a model built from the convolutional neural network and the activation function.
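The decomposition follows the Retinex assumption that an observed image is the element-wise product of a reflectance map and an illumination map. A minimal sketch of that relationship on per-pixel values (the CNN itself is omitted; a sigmoid stands in for the activation that squashes raw network outputs into [0, 1], and the raw values are made up for illustration):

```python
import math

def sigmoid(x):
    # Squash a raw network output into the [0, 1] range
    return 1.0 / (1.0 + math.exp(-x))

def compose(reflectance, illumination):
    # Retinex model: observed pixel = reflectance * illumination (element-wise)
    return [r * l for r, l in zip(reflectance, illumination)]

# Hypothetical raw network outputs for a 2-pixel image, squashed into [0, 1]
R = [sigmoid(v) for v in (2.0, -1.0)]   # reflectance map
L = [sigmoid(v) for v in (0.5, 3.0)]    # illumination map
reconstructed = compose(R, L)           # each value stays within [0, 1]
```

Because both factors lie in [0, 1], the composed image stays in valid range without clipping, which is what makes the later dot-product reconstruction well behaved.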
Step S20, determining a first loss function and a second loss function of the image data set to be trained according to the initial decomposition image set;
for a clearer understanding of the present embodiment, referring to fig. 4, in fig. 4, the initial model module is continuously optimally adjusted by calculating Loss1 (first Loss function) between r_normal (tag reflected light image) and r_low (dark light reflected light image), so that the similarity between the dark light reflected light image and the tag reflected light image is increasingly close until no further optimal adjustment is possible.
Likewise, Loss2 (the second loss function) between L_normal (the label illumination image) and L_low (the dim-light illumination image) is computed to continuously optimize the initial model module, so that the dim-light illumination image becomes increasingly similar to the label illumination image until no further improvement is possible.
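The patent calls Loss1 and Loss2 regularization functions without giving a closed form; a mean-absolute-difference (L1) distance is a common choice for such consistency terms and is used below purely as an illustrative stand-in, on flattened per-pixel lists:

```python
def l1_loss(pred, target):
    # Mean absolute difference between two images of equal size
    assert len(pred) == len(target)
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

# Loss1: consistency between the two reflectance maps (values are made up)
r_normal = [0.8, 0.6, 0.4]   # R_normal, from the label image
r_low    = [0.7, 0.5, 0.5]   # R_low, from the dim-light image
loss1 = l1_loss(r_normal, r_low)
# Loss2 between L_normal and L_low would be computed the same way
```

Back-propagating such a loss pushes the dim-light branch's decomposition toward the label branch's, which is exactly the iterative adjustment described above.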
Step S30, inputting the dim-light image set from the initial decomposition image set into an initial restoration module to obtain an initial restoration image set;
the dark ray image set comprises the dark ray reflected light image and the dark ray illumination image; the initial restoration module comprises an initial reflection restoration module and an initial illumination adjustment module; the initial set of restored images includes a first restored image and a second restored image.
Specifically, the step S30 includes:
and inputting the dark-ray reflected light image to the initial reflection restoration module to obtain the first restoration image, and inputting the dark-ray illumination image to the initial illumination adjustment module to obtain the second restoration image.
With continued reference to fig. 4, the reflection restoration module (initial reflection restoration module) is a network module based on the UNet structure, preferably the UNet3+ variant. UNet3+ combines strong feature-extraction capability with a small parameter count; inputting the dim-light reflected-light image into the UNet3+-based initial reflection restoration module quickly produces R_R (the first restored image), which retains more image detail.
The illumination adjustment module (initial illumination adjustment module) extracts illumination features from the dim-light illumination image using a deep-learning CNN model. Taking the dim-light illumination image as input, the loss function (the fourth loss function) between the output L_A (the second restored image) and the label illumination image is back-propagated for iterative optimization, so that the trained model can recover a normal-light illumination image from a dim-light one.
Step S40, determining a third loss function and a fourth loss function of the image data set to be trained according to the initial restored image set and the initial decomposed image set;
with continued reference to fig. 4, loss3 (third Loss function) between the first restored image and the tag reflected light image is calculated for back propagation, iterative optimization. And calculating Loss4 (fourth Loss function) between the label illumination image and the second recovery image for back propagation and iterative optimization.
And step S50, obtaining a trained image decomposition network module according to the first loss function and the second loss function, and obtaining a trained recovery module according to the third loss function and the fourth loss function.
The trained restoration module comprises a trained reflection restoration module and a trained illumination adjustment module.
The initial image decomposition network module is iteratively optimized with the first and second loss functions until the loss curve no longer decreases and there is no further room for optimization, yielding the trained image decomposition network module. Similarly, the initial reflection restoration module is iteratively optimized with the third loss function, and the initial illumination adjustment module with the fourth loss function, until their loss curves plateau, yielding the trained reflection restoration module and the trained illumination adjustment module. With these trained modules, a dim-light image to be optimized can be fed into the image decomposition network module and then, in parallel, into the reflection restoration module and the illumination adjustment module to obtain an output image with a normal-light effect; the output has better quality, clearer texture, more preserved detail, and less noise.
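The inference chain just described can be sketched as a composition of the three trained modules. The callables and their names here are placeholders standing in for the trained networks, not an API defined by the patent:

```python
def enhance_dim_image(image, decompose, restore_reflectance, adjust_illumination):
    # Inference pipeline: decompose, restore each component, recombine.
    # `decompose` returns a (reflectance, illumination) pair; the other two
    # callables stand in for the trained restoration/adjustment modules.
    reflectance, illumination = decompose(image)
    r_restored = restore_reflectance(reflectance)
    l_adjusted = adjust_illumination(illumination)
    # Recombine by element-wise (dot) product, per the Retinex model
    return [r * l for r, l in zip(r_restored, l_adjusted)]
```

With identity stand-ins for the modules and an all-ones illumination map, the pipeline returns the input unchanged, which is a convenient sanity check when wiring real networks in.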
As shown in fig. 5, fig. 5 is a schematic diagram of the overall framework flow involved in a second embodiment of the dim-light image optimization training method according to the present invention. This second embodiment builds on the first. In this embodiment, after the step of obtaining the trained restoration module according to the third and fourth loss functions, the method includes:
step a, inputting the first restored image and the second restored image into a preset initial brightness adjustment curve to obtain a reconstructed image corresponding to the dim-light image;
step b, determining a fifth loss function between the reconstructed image and the label image;
and c, optimizing the initial brightness adjustment curve according to the fifth loss function to obtain a trained brightness adjustment curve.
Referring to fig. 5, fig. 5 extends fig. 4 with an added brightness adjustment curve (the initial brightness adjustment curve). The brightness adjustment curve fine-tunes the images output by the reflection restoration module and the illumination adjustment module, so that the final output of the trained curve has more realistic color and balanced image brightness.
Specifically, the expression for both the initial and the trained brightness adjustment curve is LE(I, A) = I + A·I·(1 − I),
where I is the image input to the curve, LE is the image output by the curve, and A is a brightness fine-adjustment matrix.
The design process of the expression corresponding to the brightness adjustment curve is as follows:
the brightness adjustment curve design needs to meet the following three properties:
(1) Values must not be truncated, which would lose image information; the output values of the image must remain in the [0, 1] interval.
(2) The curve must be differentiable so that it can be back-propagated and the network parameters updated.
(3) The curve must be a monotonic quadratic (second-order) function, to guarantee image contrast.
In combination with the above three properties, the brightness adjustment curve may be designed as follows:
LE(I,a)=I+aI(1-I) (1)
In expression (1), a represents the dynamic adjustment range parameter of the image; however, this a can only adjust the brightness of the image globally. To adjust the brightness of the image locally, expression (1) is modified as follows:
LE(I,A)=I+AI(1-I) (2)
That is, expression (2) corresponding to the brightness adjustment curve is obtained. Since A is a brightness fine-adjustment matrix matching the pixel parameters of the input image, the curve can, compared with a conventional brightness adjustment curve, adjust the brightness of the image at the pixel level; that is, different local areas of the image can be given different brightness adjustments, so that the final output image better matches the image under normal light. Moreover, on the basis of the trained image decomposition network module, reflection restoration module and illumination adjustment module, the brightness adjustment curve input to training can reach the expected target effect image with only a single round of iterative optimization.
The first restored image and the second restored image are input to the initial brightness adjustment curve, where they are dot-multiplied and reconstructed to obtain the reconstructed image corresponding to the dim-light image; alternatively, the dot multiplication may be performed first and its result then input to the initial brightness adjustment curve.
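As a minimal illustration of the curve defined above, the following numpy sketch applies LE(I, A) = I + A·I·(1 - I) to a reconstruction obtained by dot-multiplying the two restored components. The 2x2 arrays and the constant A = 0.6 are hypothetical; in the method, A is a learned per-pixel fine-adjustment matrix.

```python
import numpy as np

def luminance_enhance(I, A):
    """Apply the quadratic brightness adjustment curve LE(I, A) = I + A*I*(1 - I).

    I is an image with values in [0, 1]; A is the per-pixel brightness
    fine-adjustment matrix. For A in [-1, 1] the output stays in [0, 1],
    so no image information is truncated.
    """
    return I + A * I * (1.0 - I)

# Reconstruct the dim-light image from the two restored components by
# element-wise (dot) multiplication, then fine-tune its brightness.
reflectance = np.full((2, 2), 0.8)   # stands in for the first restored image
illumination = np.full((2, 2), 0.5)  # stands in for the second restored image
reconstructed = reflectance * illumination  # 0.4 everywhere
brightened = luminance_enhance(reconstructed, np.full((2, 2), 0.6))
# LE(0.4, 0.6) = 0.4 + 0.6 * 0.4 * (1 - 0.4) = 0.544
```

Because the curve is quadratic and monotonic on [0, 1], it brightens dark pixels more than bright ones while preserving their ordering, which is why contrast is retained.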
In an embodiment, the step of determining a fifth loss function between the reconstructed image and the label image comprises:
determining a regularization function between the reconstructed image and the label image, and determining a structural similarity loss function between the reconstructed image and the label image, and determining a color loss function between the reconstructed image and the label image;
and taking a weighted sum among the regularization function, the structural similarity loss function and the color loss function as a fifth loss function.
Specifically, the regularization function may be an L2 regularization function, i.e., MSE (Mean Square Error).
Referring to fig. 5, an L2 regularization function is used in the Loss5 loss function (fifth loss function) to measure the similarity between the brightness adjustment image (Brightness Adjustment Image) and the label image, namely: ||B_E - I_normal||_2^2. A structural similarity loss function is also used to measure the similarity between the two, namely: SSIM(B_E, I_normal). Beyond making the training image and the target image as similar as possible, the foregoing loss terms are intended to train more detail; in the brightness adjustment curve, it is further desirable to train better brightness adjustment parameters, so a color loss (Color Loss) is introduced to measure the similarity of the two images in color, namely: L_color(B_E, I_normal). The total loss of Loss5 can thus be expressed as: Loss5 = ω1·||B_E - I_normal||_2^2 + ω2·SSIM(B_E, I_normal) + ω3·L_color(B_E, I_normal).
the weighting coefficients may be set according to actual needs, and are not limited herein.
The initial brightness adjustment curve is iteratively optimized according to the Loss5 loss function until no further improvement is obtained, yielding the trained brightness adjustment curve. When a dim-light image that needs further optimization is used as the input to the trained brightness adjustment curve, an output image with more uniform color and brightness can be obtained.
Further, a third embodiment of the dim light image optimization training method according to the present invention is provided based on the above embodiment of the dim light image optimization training method according to the present invention, in this embodiment, the step of determining the first loss function and the second loss function of the image dataset to be trained includes:
a first loss function between the tag reflected light image and the tag illuminated image is determined, and a second loss function between the dark light reflected light image and the dark light illuminated image is determined.
Specifically, the step of determining a first loss function between the tag reflected light image and the tag illumination image and determining a second loss function between the dim light reflected light image and the dim light illumination image includes:
determining a first regularization function between the tag reflected light image and the tag illumination image, taking the first regularization function as a first loss function, determining a second regularization function between the dark light reflected light image and the dark light illumination image, and taking the second regularization function as a second loss function.
The following examples are all continued with reference to fig. 5.
In this embodiment, the first regularization function and the second regularization function may both be L2 regularization functions; the only difference between the two is the pair of images to which each is applied.
The L2 regularization function is used in the Loss1 loss function to measure the similarity between the reflected light images, namely: Loss1 = ||R_dark - R_normal||_2^2. The L2 regularization function is likewise used in the Loss2 loss function to measure the similarity between the illumination images, namely: Loss2 = ||L_dark - L_normal||_2^2.
Under the combined action of the first loss function and the second loss function, the initial image decomposition network module can be subjected to iterative optimization so as to obtain a trained image decomposition network module, and then a dim light image to be optimized is input into the trained image decomposition network module, so that an illumination image and a reflected light image which are closer to the normal light effect can be decomposed.
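The Loss1 and Loss2 terms above are plain L2 (MSE) comparisons between the decomposed components of the dim-light image and those of the label image. A small numpy sketch with hypothetical 2x2 decomposition outputs:

```python
import numpy as np

def l2_loss(x, y):
    """L2 regularization term ||x - y||_2^2, averaged over pixels."""
    return float(np.mean((x - y) ** 2))

# Hypothetical 2x2 decomposition outputs for one training pair.
r_normal = np.array([[0.9, 0.8], [0.7, 0.6]])  # label reflected light image
r_dark   = np.array([[0.8, 0.7], [0.6, 0.5]])  # dim-light reflected light image
l_normal = np.full((2, 2), 0.9)                # label illumination image
l_dark   = np.full((2, 2), 0.2)                # dim-light illumination image

loss1 = l2_loss(r_dark, r_normal)  # reflected-light similarity, 0.01 here
loss2 = l2_loss(l_dark, l_normal)  # illumination similarity, 0.49 here
```

In this toy pair the illumination gap dominates, matching the intuition that a dim-light image differs from its label mainly in illumination rather than reflectance.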
In an embodiment, the step of determining a third loss function and a fourth loss function of the image dataset to be trained comprises:
a third loss function between the first restored image and the label reflected light image is determined, and a fourth loss function between the second restored image and the label illumination image is determined.
Specifically, the step of obtaining the trained recovery module according to the third loss function and the fourth loss function includes:
obtaining the trained reflection restoration module according to the third loss function, and obtaining the trained illumination adjustment module according to the fourth loss function;
for the third loss function:
the step of determining a third loss function between the first restored image and the label reflected light image comprises:
determining a regularization function between the first restored image and the tag reflected light image, and determining a structural similarity loss function between the first restored image and the tag reflected light image;
and taking the weighted sum between the regularization function and the structural similarity loss function as a third loss function.
Specifically, the L2 regularization function is used in the Loss3 loss function to measure the similarity between the reflection restored image (first restored image) and the label reflected light image, namely: ||R_r - R_normal||_2^2. In addition, a structural similarity loss function is used to measure the similarity between the two, namely: SSIM(R_r, R_normal). The Loss3 loss function is a weighted sum of these two losses: Loss3 = ω1·||R_r - R_normal||_2^2 + ω2·SSIM(R_r, R_normal). The weighting coefficients may be set according to actual needs, and are not limited herein.
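A sketch of the Loss3 weighted sum follows. The single-window SSIM used here is a deliberate simplification (the standard definition slides an 11x11 Gaussian window over the image), and the `1 - SSIM` form of the similarity term is an assumption so that identical images give zero loss.

```python
import numpy as np

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM computed over the whole image. The standard
    definition slides an 11x11 Gaussian window; this global variant is
    only a sketch."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def loss3(r_r, r_normal, w=(1.0, 1.0)):
    """Weighted sum of the L2 term and an SSIM-based term; 1 - SSIM is
    used here (an assumption) so that identical images give zero loss."""
    l2 = float(np.mean((r_r - r_normal) ** 2))
    return w[0] * l2 + w[1] * (1.0 - global_ssim(r_r, r_normal))
```

Pairing the pixel-wise L2 term with a structural term is what lets the restored reflectance recover texture detail rather than just the average intensity.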
Through the third loss function, the initial reflection restoration module can be iteratively optimized to obtain the trained reflection restoration module, so that the dim-light reflected light image to be further optimized, produced by the trained image decomposition network module, can be restored in detail and brought close to the reflected light image under normal light.
For the fourth loss function:
a step of determining a fourth loss function between the second restored image and the label illumination image, comprising:
determining a regularization function between the tag illumination image and the second restored image, and determining a gradient loss function between the tag illumination image and the second restored image based on the regularization function;
and taking the weighted sum between the regularization function and the loss function as a fourth loss function.
Specifically, the L2 regularization function is used in the Loss4 loss function to measure the similarity between the illumination restored image (second restored image) and the label illumination image, namely: ||L_r - L_normal||_2^2. A gradient loss function designed on the basis of the L2 regularization function, ||∇L_r - ∇L_normal||_2^2, makes the illumination restored image and the label illumination image as close as possible in texture, with more texture detail. The total loss of Loss4 can therefore be expressed as: Loss4 = ω1·||L_r - L_normal||_2^2 + ω2·||∇L_r - ∇L_normal||_2^2. The weighting coefficients may be set according to actual needs, and are not limited herein.
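The Loss4 weighted sum can be sketched as below, using forward finite differences as the gradient operator; the exact gradient operator and the weights are assumptions, since the text only names a gradient loss built on the L2 norm.

```python
import numpy as np

def gradients(img):
    """Forward finite differences along each image axis (an assumed
    gradient operator; the text does not fix one)."""
    return np.diff(img, axis=1), np.diff(img, axis=0)

def loss4(l_r, l_normal, w=(1.0, 1.0)):
    """Weighted sum of the L2 term and a gradient loss comparing the
    finite-difference gradients, pushing the restored illumination
    toward the texture of the label illumination image."""
    l2 = float(np.mean((l_r - l_normal) ** 2))
    gx_r, gy_r = gradients(l_r)
    gx_n, gy_n = gradients(l_normal)
    grad_term = float(np.mean((gx_r - gx_n) ** 2) + np.mean((gy_r - gy_n) ** 2))
    return w[0] * l2 + w[1] * grad_term
```

The gradient term is zero only when the two illumination maps change in the same way from pixel to pixel, which is what "close in texture" means here.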
Through the fourth loss function, the initial illumination adjustment module can be iteratively optimized to obtain the trained illumination adjustment module, so that the dim-light illumination image to be further optimized, produced by the trained image decomposition network module, can be restored in detail and brought close to the illumination image under normal light.
In addition, to give the model stronger generalization capability, image augmentation can be performed during the training of the image decomposition network module, the reflection recovery module and the illumination adjustment module by means such as rotation, shearing and color transformation; the model can be built with frameworks such as TensorFlow or PyTorch, and model initialization, reasonable hyperparameter design and dataset loading can be performed for training.
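A sketch of paired augmentation under these assumptions (rotation, random cropping as one reading of "shearing", and a mild per-channel colour scaling), applied identically to a dim-light image and its label so the pair stays aligned; all parameter ranges are hypothetical:

```python
import numpy as np

def augment_pair(dark, normal, rng):
    """Apply the same random rotation, crop and colour scaling to a
    dim-light image and its label so the training pair stays aligned."""
    k = int(rng.integers(0, 4))                    # rotate by k * 90 degrees
    dark, normal = np.rot90(dark, k), np.rot90(normal, k)
    h, w = dark.shape[:2]
    ch, cw = h // 2, w // 2                        # crop to half size
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))
    dark, normal = dark[y:y + ch, x:x + cw], normal[y:y + ch, x:x + cw]
    scale = rng.uniform(0.9, 1.1, size=(1, 1, 3))  # mild colour transform
    return np.clip(dark * scale, 0, 1), np.clip(normal * scale, 0, 1)

rng = np.random.default_rng(0)
dark = rng.random((8, 8, 3))
normal = rng.random((8, 8, 3))
aug_dark, aug_normal = augment_pair(dark, normal, rng)
```

Drawing all random parameters once and applying them to both images is the essential point; independent augmentation would destroy the supervision signal between the pair.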
Further, a fourth embodiment of the dim light image optimization training method according to the present invention is provided based on the above embodiment of the dim light image optimization training method according to the present invention, and in this embodiment, after the step of optimizing the initial brightness adjustment curve according to the fifth loss function to obtain a trained brightness adjustment curve, the method includes:
inputting the dim light image to be optimized to the trained image decomposition network module to obtain a reflected light image to be optimized and an illumination image to be optimized;
inputting the reflected light image to be optimized to the trained reflection restoration module to obtain a reflected restoration image to be optimized, and inputting the illumination image to be optimized to the trained illumination adjustment module to obtain an illumination restoration image to be optimized;
and inputting the reflection restoration image to be optimized and the illumination restoration image to be optimized into the brightness adjustment curve after training to obtain a target effect image.
In this embodiment, the trained image decomposition network module, reflection restoration module and illumination adjustment module of the above embodiments are used. If a user needs to enhance an image shot in one or more dark-light environments after the fact, the dim-light image to be optimized is first decomposed by the trained image decomposition network module; the trained reflection restoration module and the trained illumination adjustment module then restore the decomposed reflected light image and illumination image, respectively, into images with clearer textures and details; finally, the two restored images are dot-multiplied and the result is fine-tuned by the trained brightness adjustment curve. With only this single fine adjustment, the final output target effect image has more detail, less noise, and more balanced true color and contrast.
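The inference chain described above can be sketched as follows. The three trained modules are replaced by hypothetical stand-in functions (real ones would be networks loaded from checkpoints); only the order of operations (decompose, restore each component, dot-multiply, then apply the brightness adjustment curve) reflects the text.

```python
import numpy as np

# Hypothetical stand-ins for the trained modules.
def decompose(img):
    """Trained image decomposition network (placeholder Retinex split)."""
    illumination = np.clip(img.mean(axis=-1, keepdims=True), 1e-3, 1.0)
    reflectance = np.clip(img / illumination, 0.0, 1.0)
    return reflectance, illumination

def restore_reflectance(r):
    """Trained reflection restoration module (identity placeholder)."""
    return r

def adjust_illumination(l):
    """Trained illumination adjustment module (hypothetical brightening)."""
    return np.clip(l * 2.0, 0.0, 1.0)

def enhance(img, A=0.5):
    """Decompose, restore each component, dot-multiply, then apply the
    trained brightness adjustment curve LE(I, A) = I + A*I*(1 - I)."""
    r, l = decompose(img)
    r, l = restore_reflectance(r), adjust_illumination(l)
    out = r * l                         # dot-multiply reconstruction
    return out + A * out * (1.0 - out)  # single brightness fine adjustment

dark = np.full((4, 4, 3), 0.1)
result = enhance(dark)  # brighter than the input everywhere
```

Keeping the curve as the last stage means only one lightweight fine adjustment is applied per image at inference time, as the text emphasizes.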
In addition, the invention also provides an intelligent terminal, which comprises a memory, a processor and a dim light image optimization training program stored on the memory and capable of running on the processor, wherein the processor realizes the steps of the dim light image optimization training method according to the embodiment when executing the dim light image optimization training program.
The specific implementation manner of the intelligent terminal is basically the same as that of each embodiment of the dim light image optimization training method, and is not repeated here.
The present invention also proposes a computer-readable storage medium, wherein the computer-readable storage medium stores a dim light image optimization training program which, when executed by a processor, implements the steps of the dim light image optimization training method described in the above embodiments.
The specific implementation of the readable storage medium of the present invention is basically the same as the above embodiments of the dim light image optimization training method, and will not be described herein.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, including several instructions for causing a terminal device (which may be a smart terminal, a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
In the present invention, the terms "first", "second", "third", "fourth", "fifth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, and the specific meaning of the above terms in the present invention will be understood by those of ordinary skill in the art depending on the specific circumstances.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, the scope of the present invention is not limited thereto, and it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications and substitutions of the above embodiments may be made by those skilled in the art within the scope of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (7)

1. A dim light image optimization training method, characterized by comprising the following steps:
inputting an image data set to be trained into an initial image decomposition network module to obtain an initial decomposition image set, wherein the image data set to be trained comprises a label image and a dark ray image; the initial decomposition image set comprises a tag reflected light image and a tag illumination image corresponding to the tag image, and a dim light reflected light image and a dim light illumination image corresponding to the dim light image;
determining a first loss function between the tag reflected light image and the tag illumination image and a second loss function between the dim light reflected light image and the dim light illumination image according to the initial decomposed image set;
inputting a dark light line image set in the initial decomposed image set to an initial restoration module to obtain an initial restoration image set, wherein the dark light line image set comprises the dark light reflection light image and the dark light illumination image; the initial restoration module comprises an initial reflection restoration module and an initial illumination adjustment module; the initial set of restored images includes a first restored image and a second restored image;
the step of inputting the dark light line image set in the initial decomposed image set to an initial restoration module to obtain an initial restored image set includes:
inputting the dark-ray reflected light image to the initial reflection restoration module to obtain the first restoration image, and inputting the dark-ray illumination image to the initial illumination adjustment module to obtain the second restoration image;
determining a third loss function between the first restored image and the tag reflected light image and a fourth loss function between the second restored image and the tag illumination image according to the initial restored image set and the initial decomposed image set;
obtaining a trained image decomposition network module according to the first loss function and the second loss function, and obtaining a trained recovery module according to a third loss function and the fourth loss function, wherein the trained recovery module comprises a trained reflection recovery module and a trained illumination adjustment module;
obtaining the trained reflection restoration module according to the third loss function, and obtaining the trained illumination adjustment module according to the fourth loss function;
after the step of obtaining the trained recovery module according to the third loss function and the fourth loss function, the method comprises the following steps:
inputting the first restored image and the second restored image to a preset initial brightness adjustment curve for dot multiplication and reconstruction to obtain a reconstructed image corresponding to the dim light image;
determining a fifth loss function between the reconstructed image and the label image;
and optimizing the initial brightness adjustment curve according to the fifth loss function to obtain a trained brightness adjustment curve.
2. The dim light image optimization training method according to claim 1, wherein the step of determining a fifth loss function between the reconstructed image and the label image comprises:
determining a regularization function between the reconstructed image and the label image, and determining a structural similarity loss function between the reconstructed image and the label image, and determining a color loss function between the reconstructed image and the label image;
and taking a weighted sum among the regularization function, the structural similarity loss function and the color loss function as a fifth loss function.
3. The darkness image optimization training method according to claim 1, wherein the step of determining a first loss function between the tag reflected light image and the tag illumination image and determining a second loss function between the darkness reflected light image and the darkness illumination image comprises:
determining a first regularization function between the tag reflected light image and the tag illumination image, taking the first regularization function as a first loss function, determining a second regularization function between the dark light reflected light image and the dark light illumination image, and taking the second regularization function as a second loss function;
the step of determining a third loss function between the first restored image and the label reflected light image comprises:
determining a regularization function between the first restored image and the tag reflected light image, and determining a structural similarity loss function between the first restored image and the tag reflected light image;
and taking the weighted sum between the regularization function and the structural similarity loss function as a third loss function.
4. The dim light image optimization training method according to claim 1, wherein the step of determining a fourth loss function between the second restored image and the label illumination image comprises:
determining a regularization function between the tag illumination image and the second restored image, and determining a gradient loss function between the tag illumination image and the second restored image based on the regularization function;
and taking the weighted sum between the regularization function and the loss function as a fourth loss function.
5. The dim light image optimization training method according to any one of claims 1-4, characterized in that, after the step of optimizing the initial brightness adjustment curve according to the fifth loss function to obtain a trained brightness adjustment curve, it includes:
inputting the dim light line image to be optimized to the trained image decomposition network module to obtain a reflected light image to be optimized and an illumination image to be optimized;
inputting the reflected light image to be optimized to the trained reflection restoration module to obtain a reflected restoration image to be optimized, and inputting the illumination image to be optimized to the trained illumination adjustment module to obtain an illumination restoration image to be optimized;
and inputting the reflection restoration image to be optimized and the illumination restoration image to be optimized into the brightness adjustment curve after training to obtain a target effect image.
6. An intelligent terminal, characterized by comprising a memory, a processor, and a dim light image optimization training program stored on the memory and runnable on the processor, wherein: the dim light image optimization training program, when executed by the processor, implements the steps of the dim light image optimization training method according to any one of claims 1 to 5.
7. A computer readable storage medium, characterized in that a dim light image optimization training program is stored on the computer readable storage medium, which, when executed by a processor, implements the steps of the dim light image optimization training method according to any one of claims 1 to 5.
CN202210536653.6A 2022-05-17 2022-05-17 Dim light image optimization training method, intelligent terminal and computer readable storage medium Active CN114998120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210536653.6A CN114998120B (en) 2022-05-17 2022-05-17 Dim light image optimization training method, intelligent terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210536653.6A CN114998120B (en) 2022-05-17 2022-05-17 Dim light image optimization training method, intelligent terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN114998120A CN114998120A (en) 2022-09-02
CN114998120B true CN114998120B (en) 2024-01-12

Family

ID=83028013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210536653.6A Active CN114998120B (en) 2022-05-17 2022-05-17 Dim light image optimization training method, intelligent terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114998120B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899197A (en) * 2020-08-05 2020-11-06 广州市百果园信息技术有限公司 Image brightening and denoising method and device, mobile terminal and storage medium
CN111918095A (en) * 2020-08-05 2020-11-10 广州市百果园信息技术有限公司 Dim light enhancement method and device, mobile terminal and storage medium
CN112862713A (en) * 2021-02-02 2021-05-28 山东师范大学 Attention mechanism-based low-light image enhancement method and system
CN113344804A (en) * 2021-05-11 2021-09-03 湖北工业大学 Training method of low-light image enhancement model and low-light image enhancement method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Jiahong. Research on Image Enhancement Methods in Dark-Light Environments. China Master's Theses Full-text Database, Information Science and Technology, 2022, pp. 15-26. *

Also Published As

Publication number Publication date
CN114998120A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN111292264A (en) Image high dynamic range reconstruction method based on deep learning
CN112381897B (en) Low-illumination image enhancement method based on self-coding network structure
CN110610463A (en) Image enhancement method and device
Khan et al. Localization of radiance transformation for image dehazing in wavelet domain
CN110148088B (en) Image processing method, image rain removing method, device, terminal and medium
US11663707B2 (en) Method and system for image enhancement
CN110717868A (en) Video high dynamic range inverse tone mapping model construction and mapping method and device
WO2022133194A1 (en) Deep perceptual image enhancement
CN111047543A (en) Image enhancement method, device and storage medium
WO2023005818A1 (en) Noise image generation method and apparatus, electronic device, and storage medium
Liu et al. Learning hadamard-product-propagation for image dehazing and beyond
CN115526803A (en) Non-uniform illumination image enhancement method, system, storage medium and device
Dhara et al. Exposedness-based noise-suppressing low-light image enhancement
CN114757854A (en) Night vision image quality improving method, device and equipment based on multispectral analysis
Yuan et al. Locally and multiply distorted image quality assessment via multi-stage CNNs
KR102277005B1 (en) Low-Light Image Processing Method and Device Using Unsupervised Learning
CN112085668B (en) Image tone mapping method based on region self-adaptive self-supervision learning
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
CN117611501A (en) Low-illumination image enhancement method, device, equipment and readable storage medium
CN114998120B (en) Dim light image optimization training method, intelligent terminal and computer readable storage medium
CN116645305A (en) Low-light image enhancement method based on multi-attention mechanism and Retinex
Tade et al. Tone mapped high dynamic range image quality assessment techniques: survey and analysis
CN112102175A (en) Image contrast enhancement method and device, storage medium and electronic equipment
CN110489584B (en) Image classification method and system based on dense connection MobileNet model
Bae et al. Non-iterative tone mapping with high efficiency and robustness

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant