CN113034417A - Image enhancement system and image enhancement method based on a generative adversarial network - Google Patents


Info

Publication number
CN113034417A
CN113034417A (application CN202110375243.3A)
Authority
CN
China
Prior art keywords
image
original
module
night scene
normal light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110375243.3A
Other languages
Chinese (zh)
Inventor
朱宁波 (Zhu Ningbo)
王猛 (Wang Meng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University
Priority to CN202110375243.3A
Publication of CN113034417A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image enhancement system based on a generative adversarial network, comprising an original image acquisition module and an adversarial network model. The adversarial network model comprises an image synthesis module, an image discrimination module and multiple loss functions. The image synthesis module outputs a synthesized image and the image discrimination module outputs an enhanced image; the multiple loss functions measure image loss from the output of the image discrimination module and adjust the synthesized image output by the image synthesis module, improving the enhancement effect. The loss functions comprise an adversarial loss function, a cycle-consistency loss function, a perceptual loss function and a total loss function. The invention introduces adversarial training, an attention mechanism, a Unet with a residual block network, and perceptual loss, and applies a multi-scale synthesis module and an image discriminator to generate from and discriminate against the original night-scene captured image, thereby enhancing the visual effect of night-scene images. Meanwhile, the invention also provides an image enhancement method using the image enhancement system.

Description

Image enhancement system and image enhancement method based on a generative adversarial network
Technical Field
The invention relates to the technical field of image enhancement, and in particular to an image enhancement system and an image enhancement method based on a generative adversarial network.
Background
In the narrow sense, a night-scene environment refers only to a shooting scene at night, without sunlight. For those skilled in the art, however, night-scene environments are classified by the illumination visibility of the shooting environment: low-light and dim-light environments together with the narrow-sense night-scene environment form the generalized night-scene environment, i.e., any shooting environment with abnormal illumination is called a night-scene environment. Correspondingly, the narrow-sense normal-light shooting environment is a shooting environment under normal sunlight, while for those skilled in the art any shooting environment with high illumination visibility is defined as a generalized normal-light shooting environment, which therefore covers more than shooting under normal sunlight.
Capturing a clear image at night or in a low-light or dim-light environment remains a difficult challenge. In a night-scene environment in particular, the night-scene images or videos captured by many cameras, mobile phones and monitoring devices display poorly and are often hard to make out. How to improve the display quality of images captured in low-light, dim-light and night-scene environments is therefore an important research subject in the field of computer vision.
In the prior art, when an image is captured in a low-light, dim-light or night-scene environment, the brightness of the captured image can be enhanced by prolonging the exposure time and raising the sensitivity (ISO), but this also amplifies image noise, so the captured night-scene image still displays poorly.
The industry has also tried to improve the display quality of images captured in night-scene environments by software post-processing. For night-scene images processed after capture, existing methods mainly comprise traditional image enhancement algorithms and image enhancement algorithms based on deep neural networks.
Traditional night-scene image enhancement algorithms mainly comprise histogram equalization and frequency-domain enhancement methods, but they rely on prior knowledge and iterative processing, so their time complexity is high and a secondary degradation phenomenon can occur.
Image enhancement algorithms based on deep neural networks have also been applied in this field, but the depth models designed in these methods are too large and therefore time-consuming; the enhanced images suffer from color distortion and over-sharpening; and the brightness, texture, noise, definition, contrast and color of the night-scene image are not improved in a comprehensive, coordinated way.
In view of the above, a new image enhancement system and enhancement method are needed to effectively solve the above problems.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of poor night-scene image display, color distortion and over-sharpening in the prior art. The invention provides an image enhancement system based on a generative adversarial network, which achieves night-scene image enhancement through adversarial training.
Meanwhile, the invention also provides an image enhancement method using the image enhancement system based on the generative adversarial network.
An image enhancement system based on a generative adversarial network comprises an original image acquisition module and an adversarial network model. The original image acquisition module generates an original captured image set comprising a plurality of original captured image pairs; each pair comprises an original night-scene captured image and an original normal-light captured image, both taken of the same subject. The adversarial network model comprises an image synthesis module, an image discrimination module and multiple loss functions. The image synthesis module receives the original captured images from the original image acquisition module and outputs synthesized images; the image discrimination module receives the synthesized images from the image synthesis module and outputs enhanced images; the multiple loss functions measure image loss from the output of the image discrimination module and adjust the synthesized image output by the image synthesis module, improving the enhancement effect. The loss functions comprise an adversarial loss function, a cycle-consistency loss function, a perceptual loss function and a total loss function.
Furthermore, the image enhancement system also comprises an attention feature generation module, which uses a graph attention network model to perform a weighted summation over adjacent node features in the original night-scene captured image and the original normal-light captured image, generating a night-scene attention feature map and a normal-light attention feature map.
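The weighted summation of neighboring node features can be sketched as follows. This is a minimal illustration of GAT-style attention in which the weights depend only on node features; the scoring rule, the parameter vector `a`, and all names are our assumptions, not the patent's actual network.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def gat_node_feature(h_i, neighbors, a):
    """Weighted sum of neighbor features: score each neighbor from the
    concatenated (center, neighbor) features, normalize with softmax,
    then sum. Note the weights depend only on node features."""
    scores = np.array([a @ np.concatenate([h_i, h_j]) for h_j in neighbors])
    alpha = softmax(scores)    # attention weights, summing to 1
    return sum(w * h_j for w, h_j in zip(alpha, neighbors))

# toy example: 2-D features for three neighboring pixels/nodes
h_i = np.array([1.0, 0.0])
neighbors = [np.array([0.5, 0.5]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
a = np.ones(4)                 # illustrative attention parameter vector
feat = gat_node_feature(h_i, neighbors, a)
```

Because the weights are a convex combination, the output feature always stays inside the range spanned by the neighbor features.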
Further, the image synthesis module comprises a normal-light image synthesis module, which synthesizes the original night-scene captured image and the night-scene attention feature map into a normal-light synthesized image; the normal-light image discrimination module receives the normal-light synthesized image and the original normal-light captured image and outputs an enhanced image.
Further, the image synthesis module also comprises a night-scene image synthesis module, which synthesizes the original normal-light captured image and the normal-light attention feature map into a night-scene synthesized image; the night-scene image discrimination module receives the night-scene synthesized image and the original night-scene captured image and outputs an enhanced image.
Further, the network structure of the image synthesis module adopts a Unet with a residual block network. The Unet comprises encoding blocks, decoding blocks, downsampling and upsampling. The original night-scene captured image and the night-scene attention feature map pass through convolutions at different levels in the encoding blocks and max-pooling downsampling, learning the deep features of the original night-scene captured image; the deep features pass through deconvolution upsampling and the decoding blocks, and the synthesized image is then output through a convolution layer, the residual block network and an upsampling layer in turn.
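The Unet data flow above can be sketched at the shape level. This is only a structural illustration, assuming a single-channel image, max-pooling for downsampling and nearest-neighbour upsampling as a stand-in for deconvolution; the learned convolutions of the actual module are omitted.

```python
import numpy as np

def max_pool2(x):
    """2x downsampling by max pooling (encoder side)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbour upsampling (decoder side)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def residual_block(x, f=lambda t: 0.1 * t):
    """y = x + f(x): the residual path carries the original
    information through unchanged."""
    return x + f(x)

def toy_unet(x):
    """Encode (downsample), decode (upsample), fuse with the skip
    connection, then apply a residual block."""
    skip = x                  # encoder feature saved for the skip path
    deep = max_pool2(x)       # downsampled "deep feature"
    up = upsample2(deep)      # decoder upsampling back to input size
    fused = (up + skip) / 2   # feature fusion via the skip connection
    return residual_block(fused)

img = np.arange(16.0).reshape(4, 4)
out = toy_unet(img)           # same spatial size as the input image
```

The skip connection is what lets features of different levels be captured and fused, which is the stated advantage of the Unet in this system.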
Furthermore, the image discrimination module adds a patch GAN: the synthesized image output from the image synthesis module is fed into a convolutional network to obtain an n x n feature map, discrimination yields an n x n matrix, and the enhanced-image result is finally output, with the mean of the matrix taken as the true/false result.
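The mean-of-matrix decision can be shown directly. A minimal sketch: the n x n score map (here 4 x 4) stands for per-patch discriminator outputs; the decision threshold of 0.5 is an illustrative assumption, as is every name below.

```python
import numpy as np

def patchgan_decision(score_map, threshold=0.5):
    """Each entry of the n x n score map judges one local receptive
    field (patch); the mean of the matrix is taken as the overall
    true/false result, so every local region must look realistic."""
    mean_score = float(np.mean(score_map))
    return mean_score, mean_score > threshold

# toy 4x4 patch score map: most patches look real, one looks fake
scores = np.full((4, 4), 0.9)
scores[0, 0] = 0.1                      # one badly exposed patch
mean_score, looks_real = patchgan_decision(scores)
```

Because the mean averages over all patches, a single locally over- or under-exposed region lowers the score, which is how the patch GAN discourages local exposure defects.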
Further, the adversarial loss function satisfies the following formulas:
L_GAN(G_AB, D_B, X, Y) = E_{y~p(y)}[log D_B(y)] + E_{x~p(x)}[log(1 - D_B(G_AB(x)))]
L_GAN(G_BA, D_A, Y, X) = E_{x~p(x)}[log D_A(x)] + E_{y~p(y)}[log(1 - D_A(G_BA(y)))]
wherein: G_AB is the normal-light image synthesis module, x is an original night-scene captured image, and G_AB(x) represents the normal-light synthesized image generated from the original night-scene captured image by the generator G_AB;
G_BA is the night-scene image synthesis module, y is an original normal-light captured image, and G_BA(y) represents the night-scene synthesized image generated from the original normal-light captured image by the night-scene image synthesis module G_BA;
D_B is the normal-light image discrimination module, which discriminates the authenticity of the generated normal-light synthesized image against the original normal-light captured image;
D_A is the night-scene image discrimination module, which discriminates the authenticity of the generated night-scene synthesized image against the original night-scene captured image.
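One direction of the adversarial objective can be computed as follows. A minimal sketch assuming discriminator scores in (0, 1); the function name, sample values, and `eps` stabilizer are our assumptions.

```python
import numpy as np

def adversarial_loss(d_real, d_fake, eps=1e-12):
    """One direction of the GAN objective, as in
    L_GAN(G_AB, D_B, X, Y): E[log D_B(y)] + E[log(1 - D_B(G_AB(x)))].
    d_real: discriminator scores on original normal-light images;
    d_fake: scores on synthesized normal-light images."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return float(np.mean(np.log(d_real + eps))
                 + np.mean(np.log(1.0 - d_fake + eps)))

# a confident discriminator keeps the objective near 0 (from below)
loss_good_d = adversarial_loss([0.99, 0.98], [0.01, 0.02])
# a fooled discriminator (all scores 0.5) gives about 2*log(0.5)
loss_fooled = adversarial_loss([0.5, 0.5], [0.5, 0.5])
```

The discriminator maximizes this quantity while the synthesis module acts to minimize it, which is the training dynamic the patent relies on.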
Further, the cycle-consistency loss function satisfies the following formula:
L_cyc(G_AB, G_BA, X, Y) = E_{x~p(x)}[||G_BA(G_AB(x)) - x||_1] + E_{y~p(y)}[||G_AB(G_BA(y)) - y||_1]
wherein: log is the logarithm operation, and E represents the expectation over the corresponding data distribution.
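The cycle-consistency term above is easy to demonstrate numerically. In this sketch the two "generators" are toy inverse functions (brighten and darken by a constant), which are our stand-ins for the synthesis modules, so the cycle loss is essentially zero.

```python
import numpy as np

def cycle_consistency_loss(x, y, g_ab, g_ba):
    """L1 cycle-consistency: an image mapped to the other domain and
    back should return to itself, i.e. G_BA(G_AB(x)) ~ x and
    G_AB(G_BA(y)) ~ y."""
    rec_x = g_ba(g_ab(x))     # night -> normal -> night
    rec_y = g_ab(g_ba(y))     # normal -> night -> normal
    return float(np.mean(np.abs(rec_x - x)) + np.mean(np.abs(rec_y - y)))

# toy generators: exact inverses of each other
brighten = lambda img: img + 0.2
darken = lambda img: img - 0.2
x = np.array([0.1, 0.3])      # night-scene pixel values
y = np.array([0.7, 0.9])      # normal-light pixel values
loss_perfect = cycle_consistency_loss(x, y, brighten, darken)
```

When the two mappings are not inverses, the loss grows, which is what prevents the model from collapsing all inputs to a single output picture.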
Further, the perceptual loss function satisfies the following formula:
L_per(G_AB, G_BA, X, Y) = (1/(H·W)) ( ||Φ_j(G_BA(G_AB(x))) - Φ_j(x)||_2^2 + ||Φ_j(G_AB(G_BA(y))) - Φ_j(y)||_2^2 )
wherein: Φ_j is the feature map of the j-th layer produced by the convolutional network from the input picture, H represents the height of the feature map, and W represents the width of the feature map.
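The per-layer term of the perceptual loss reduces to a normalized squared distance between two feature maps. A minimal sketch: the feature extractor Φ_j (a pretrained CNN layer in practice) is outside the sketch, so we feed in toy feature maps directly; all names are ours.

```python
import numpy as np

def perceptual_loss_layer(feat_rec, feat_orig):
    """Squared L2 distance between the feature maps of the
    reconstructed and the original image, normalized by the map's
    height H and width W, matching the formula above."""
    h, w = feat_orig.shape
    diff = feat_rec - feat_orig
    return float(np.sum(diff ** 2) / (h * w))

phi_orig = np.ones((2, 3))          # toy 2x3 feature map of the original
phi_rec = np.ones((2, 3)) * 1.5     # feature map of a reconstruction
p_loss = perceptual_loss_layer(phi_rec, phi_orig)
```

Comparing feature maps rather than raw pixels is what lets this loss penalize over-smoothed texture and missing high-frequency detail.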
Further, the total loss function satisfies the following formula:
L(G_AB, G_BA, D_A, D_B, X, Y) = L_GAN(G_AB, D_B, X, Y) + L_GAN(G_BA, D_A, Y, X) + λ_cyc·L_cyc(G_AB, G_BA, X, Y) + λ_per·L_per(G_AB, G_BA, X, Y)
wherein:
L_GAN(G_AB, D_B, X, Y) and L_GAN(G_BA, D_A, Y, X) represent the adversarial losses;
L_cyc(G_AB, G_BA, X, Y) represents the cycle-consistency loss;
L_per(G_AB, G_BA, X, Y) represents the perceptual loss;
λ_cyc is the coefficient of the cycle-consistency loss and λ_per is the coefficient of the perceptual loss.
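The total objective is a weighted sum of the four terms above. In this sketch the individual loss values and the weights 10.0 and 1.0 are illustrative assumptions; the patent only states that λ_cyc and λ_per are coefficients, without giving values.

```python
def total_loss(l_gan_ab, l_gan_ba, l_cyc, l_per,
               lam_cyc=10.0, lam_per=1.0):
    """Total objective: the two adversarial terms plus the weighted
    cycle-consistency and perceptual terms, matching the formula
    L = L_GAN + L_GAN + lambda_cyc*L_cyc + lambda_per*L_per."""
    return l_gan_ab + l_gan_ba + lam_cyc * l_cyc + lam_per * l_per

# example values for the four component losses (illustrative only)
combined = total_loss(l_gan_ab=0.6, l_gan_ba=0.7, l_cyc=0.05, l_per=0.2)
```

Raising λ_cyc biases training toward faithful reconstruction, while raising λ_per biases it toward texture and high-frequency fidelity; balancing the two is the usual tuning knob in such models.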
An image enhancement method based on a generative adversarial network comprises the following steps:
providing an original image acquisition module to generate an original captured image set, wherein the set comprises a plurality of original captured image pairs, each pair comprising an original night-scene captured image and an original normal-light captured image of the same subject;
providing an adversarial network model comprising an image synthesis module, an image discrimination module and multiple loss functions, wherein the image synthesis module receives the original captured images from the original image acquisition module and outputs synthesized images; the image discrimination module receives the synthesized images from the image synthesis module and outputs enhanced images; and the multiple loss functions measure image loss from the output of the image discrimination module and adjust the synthesized image output by the image synthesis module, improving the enhancement effect.
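The data flow of these steps for one night-to-normal direction can be sketched end to end. All modules here are toy stand-ins (simple brightening/darkening functions and a mean-based discriminator), and the weight 10.0 is an assumed λ_cyc; none of this is the patent's trained network.

```python
import numpy as np

def training_step(x_night, g_ab, g_ba, d_b):
    """One forward pass of the method's data flow: synthesize a
    normal-light image from the night-scene input, score it with the
    discrimination module, and compute the loss terms that would
    drive the adjustment of the synthesis module."""
    fake_normal = g_ab(x_night)                   # image synthesis module
    d_score = d_b(fake_normal)                    # image discrimination module
    adv = -float(np.log(d_score + 1e-12))         # generator's adversarial term
    rec = g_ba(fake_normal)                       # cycle back to night domain
    cyc = float(np.mean(np.abs(rec - x_night)))   # cycle-consistency term
    return fake_normal, adv + 10.0 * cyc          # 10.0: assumed lambda_cyc

g_ab = lambda img: np.clip(img + 0.4, 0.0, 1.0)   # toy brightening generator
g_ba = lambda img: np.clip(img - 0.4, 0.0, 1.0)   # toy darkening generator
d_b = lambda img: float(np.mean(img))             # toy discriminator score
x = np.array([0.1, 0.2, 0.3])                     # toy night-scene pixels
enhanced, loss = training_step(x, g_ab, g_ba, d_b)
```

In actual training the loss would be backpropagated to update the synthesis module; here the pass only shows how the modules and loss terms connect.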
Further, the multiple loss functions comprise an adversarial loss function, a cycle-consistency loss function, a perceptual loss function and a total loss function, satisfying the following formulas:
Formula one:
L_GAN(G_AB, D_B, X, Y) = E_{y~p(y)}[log D_B(y)] + E_{x~p(x)}[log(1 - D_B(G_AB(x)))]
Formula two:
L_GAN(G_BA, D_A, Y, X) = E_{x~p(x)}[log D_A(x)] + E_{y~p(y)}[log(1 - D_A(G_BA(y)))]
Formula three:
L_cyc(G_AB, G_BA, X, Y) = E_{x~p(x)}[||G_BA(G_AB(x)) - x||_1] + E_{y~p(y)}[||G_AB(G_BA(y)) - y||_1]
Formula four:
L_per(G_AB, G_BA, X, Y) = (1/(H·W)) ( ||Φ_j(G_BA(G_AB(x))) - Φ_j(x)||_2^2 + ||Φ_j(G_AB(G_BA(y))) - Φ_j(y)||_2^2 )
Formula five:
L(G_AB, G_BA, D_A, D_B, X, Y) = L_GAN(G_AB, D_B, X, Y) + L_GAN(G_BA, D_A, Y, X) + λ_cyc·L_cyc(G_AB, G_BA, X, Y) + λ_per·L_per(G_AB, G_BA, X, Y)
wherein: G_AB is the normal-light image synthesis module, x is an original night-scene captured image, and G_AB(x) represents the normal-light synthesized image generated by the generator G_AB;
G_BA is the night-scene image synthesis module, y is an original normal-light captured image, and G_BA(y) represents the night-scene synthesized image generated by the night-scene image synthesis module G_BA;
D_B is the normal-light image discrimination module, which discriminates the authenticity of the generated normal-light synthesized image against the original normal-light captured image;
D_A is the night-scene image discrimination module, which discriminates the authenticity of the generated night-scene synthesized image against the original night-scene captured image;
log is the logarithm operation, and E represents the expectation over the data distribution;
Φ_j is the feature map of the j-th layer produced by the convolutional network from the input picture, H represents the height of the feature map, and W represents the width of the feature map;
L_GAN(G_AB, D_B, X, Y) and L_GAN(G_BA, D_A, Y, X) represent the adversarial losses;
L_cyc(G_AB, G_BA, X, Y) represents the cycle-consistency loss;
L_per(G_AB, G_BA, X, Y) represents the perceptual loss;
λ_cyc is the coefficient of the cycle-consistency loss and λ_per is the coefficient of the perceptual loss.
Compared with the prior art, the image enhancement system based on the generative adversarial network adds a normal-light image discrimination module, which adopts a patch GAN to ensure that every local region of the enhanced image looks like real natural light; this is important for avoiding local over-exposure or under-exposure.
Meanwhile, multiple loss functions are set in the adversarial network model. Applying the multiple loss functions restores color information better and overcomes the defects of over-smoothed texture and missing high-frequency information in the generated picture; the perceptual loss gives the regenerated normal-light image a good restoration effect in both low-level pixel values and high-level abstract features.
Furthermore, an attention mechanism is introduced into the image enhancement system, guiding the Unet and residual block network that serve as the image synthesis module through the training process and preserving the texture and structure of the original image.
Finally, because the image synthesis module adopts a Unet with a residual block network, it can carry more information from the original image and better generate the normal-light image.
Drawings
FIG. 1 is a block diagram of an image enhancement system based on a generative adversarial network according to the present invention;
FIG. 2 is a schematic diagram of the operation of one embodiment of the image enhancement system of FIG. 1;
FIG. 3 is a schematic diagram of a network structure of the image composition module shown in FIG. 1;
FIG. 4 is a block diagram of the image determination module shown in FIG. 1;
FIG. 5 is a graph of the effect of the image enhancement experiment of the present invention;
fig. 6 is a schematic diagram of the operation of another embodiment of the image enhancement system based on a generative adversarial network of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 and fig. 2, fig. 1 is a block diagram of an image enhancement system based on a generative adversarial network according to the present invention, and fig. 2 is an operating schematic of the image enhancement system shown in fig. 1. The image enhancement system 10 includes an original image acquisition module 11, an attention feature map generation module 12, and an adversarial network model 20. The adversarial network model 20 performs optimization training and testing on the original captured images 111 and 113 from the image acquisition module 11 and the attention feature maps 121 and 123 from the attention feature map generation module 12 to generate an enhanced image.
The image acquisition module 11 acquires a set of original night-scene captured images 111 and a set of original normal-light captured images 113 of target subjects in real scenes. Each original night-scene captured image 111 corresponds one-to-one to an original normal-light captured image 113: the two form an original captured image pair of the same subject taken in different shooting environments. The image acquisition module 11 acquires a plurality of such original captured image pairs.
Specifically, in the present embodiment, a user uses a camera to capture an original night view captured image 111 in a night view environment for the same subject to be captured; meanwhile, the original normal light shot image 113 is obtained by shooting the same subject in a normal light environment with the same camera. The same photographic subject may be the same photographic subject or the same photographic scene.
Meanwhile, the attention feature map generation module 12 uses a Graph Attention Network (GAT) model to perform a weighted summation over the features of neighboring nodes in the original night-scene captured image 111 and the original normal-light captured image 113, generating a night-scene attention feature map 121 and a normal-light attention feature map 123. In the attention feature map generation module 12, the weights on neighboring node features in the original captured images 111 and 113 depend entirely on the node features, independent of the graph structure. The night-scene attention feature map 121 is generated by the module's weighted summation over the neighboring node features of the original night-scene captured image 111; the normal-light attention feature map 123 is generated likewise from the original normal-light captured image 113.
The adversarial network model 20 trains and tests on the original captured images and generates optimized enhanced images that display more clearly, solving the technical problem of poor definition in low-light, dim-light and night-scene environments. The adversarial network model 20 includes an image synthesis module 23, an image discrimination module 25, and loss functions (not shown).
The image synthesizing module 23 includes a night scene image synthesizing module 233 and a normal light image synthesizing module 231.
The ordinary light image synthesizing module 231 receives the original night scene captured image 111 input from the original image capturing module 11 and the night scene attention feature map 121 input from the attention feature map generating module 12, and correspondingly outputs a synthesized ordinary light image 2310.
The night-scene image synthesizing module 233 receives the original normal-light captured image 113 input from the original image capturing module 11 and the normal-light attention feature map 123 input from the attention feature map generating module 12, and outputs a synthesized night-scene image 2330.
Please refer to fig. 3, which is a schematic diagram of the network structure of the image synthesis module shown in fig. 1. The network structure of the image synthesis module 23 adopts a Unet with a residual block network. The Unet comprises encoding blocks, decoding blocks, downsampling and upsampling. The original night-scene captured image 111 and the night-scene attention feature map 121 pass through convolutions at different levels in the encoding blocks and max-pooling downsampling, learning the deep features of the original night-scene captured image 111; the deep features pass through deconvolution upsampling and the decoding blocks to obtain a feature map of the same size as the input image, and the synthesized images 2310 and 2330 are output through a convolution layer, the residual block network and an upsampling layer. The advantage of the Unet is that features of different levels can be captured and fused through feature concatenation, so more information from the original captured images is carried and the synthesized image is generated better. With the Unet and residual block network as the network structure of the image synthesis module 23, the normal-light image synthesis module 231 takes the original night-scene captured image 111 and the night-scene attention feature map 121 as input to generate the normal-light synthesized image 2310; correspondingly, the night-scene image synthesis module 233 takes the original normal-light captured image 113 and the normal-light attention feature map 123 as input to generate the night-scene synthesized image 2330.
The image discrimination module 25 discriminates the synthesized images from the original captured images and thereby drives the image synthesis module 23, through training, to produce more realistic images, ensuring that every local region of the enhanced image is closer to realistic natural light.
As shown in fig. 4, the image discrimination module 25 adds a patch GAN: the synthesized images 2310 and 2330 output from the image synthesis module 23 are first fed into a convolutional network to obtain an n x n feature map, in which each element corresponds to one receptive field (patch) of the original captured images 111 and 113; each element is discriminated by the image discrimination module 25 to obtain an n x n matrix, and the mean of the matrix is finally output as the true/false result.
The image discrimination module 25 includes a normal-light image discrimination module 251 and a night-scene image discrimination module 253. The normal-light image discrimination module 251 receives the normal-light synthesized image 2310 from the normal-light image synthesis module 231 and the original normal-light captured image 113, obtains an n x n matrix through discrimination, and finally outputs the normal-light image enhancement result, with the mean of the matrix taken as the true/false result.
Similarly, the night-scene image discrimination module 253 receives the night-scene synthesized image 2330 from the night-scene image synthesis module 233 and the original night-scene captured image 111, obtains an n x n matrix through discrimination, and finally outputs the enhanced-image result, with the mean of the matrix taken as the true/false result.
The loss functions are intended to adjust the image synthesis module 23 to generate target images closer to real captures, thereby improving image quality. The loss functions include an adversarial loss function, a cycle-consistency loss function, a perceptual loss function, and a total loss function.
The adversarial loss function satisfies the following formulas:
Formula one:
L_GAN(G_AB, D_B, X, Y) = E_{y~p(y)}[log D_B(y)] + E_{x~p(x)}[log(1 - D_B(G_AB(x)))]
Formula two:
L_GAN(G_BA, D_A, Y, X) = E_{x~p(x)}[log D_A(x)] + E_{y~p(y)}[log(1 - D_A(G_BA(y)))]
wherein: G_AB is the normal-light image synthesis module, x is an original night-scene captured image, and G_AB(x) represents the normal-light synthesized image generated from the original night-scene captured image by the generator G_AB;
G_BA is the night-scene image synthesis module, y is an original normal-light captured image, and G_BA(y) represents the night-scene synthesized image generated from the original normal-light captured image by the night-scene image synthesis module G_BA;
D_B is the normal-light image discrimination module, which discriminates the authenticity of the generated normal-light synthesized image against the original normal-light captured image;
D_A is the night-scene image discrimination module, which discriminates the authenticity of the generated night-scene synthesized image against the original night-scene captured image.
Further, the cycle-consistency loss function satisfies the following formula:
Formula three:
L_cyc(G_AB, G_BA, X, Y) = E_{x~p(x)}[||G_BA(G_AB(x)) - x||_1] + E_{y~p(y)}[||G_AB(G_BA(y)) - y||_1]
wherein: log is the logarithm operation, and E represents the expectation over the data distribution. To convert an original night-scene captured image into a normal-light image, the night-scene image synthesis module 233 and the normal-light image synthesis module 231 must convert between the two domains in both directions: after a picture from X is converted into the Y space, it should be convertible back. This prevents the adversarial network model 20 from mapping all pictures in X to the same picture in the Y space.
Further, the perceptual loss function satisfies the following formula:
Formula four:
L_per(G_AB, G_BA, X, Y) = (1/(H·W)) ( ||Φ_j(G_BA(G_AB(x))) - Φ_j(x)||_2^2 + ||Φ_j(G_AB(G_BA(y))) - Φ_j(y)||_2^2 )
wherein: Φ_j is the feature map of the j-th layer produced by the convolutional network from the input picture, H represents the height of the feature map, and W represents the width of the feature map. Introducing the perceptual loss function overcomes the defects of over-smoothed texture and missing high-frequency information in the generated picture; the perceptual loss gives the regenerated normal-light image a good restoration effect in both low-level pixel values and high-level abstract features.
Further, the total loss function satisfies the following formula:
Formula five:
L(G_AB, G_BA, D_A, D_B, X, Y) = L_GAN(G_AB, D_B, X, Y) + L_GAN(G_BA, D_A, Y, X) + λ_cyc·L_cyc(G_AB, G_BA, X, Y) + λ_per·L_per(G_AB, G_BA, X, Y)
wherein:
L_GAN(G_AB, D_B, X, Y) and L_GAN(G_BA, D_A, Y, X) represent the adversarial losses;
L_cyc(G_AB, G_BA, X, Y) represents the cycle-consistency loss;
L_per(G_AB, G_BA, X, Y) represents the perceptual loss;
λ_cyc is the coefficient of the cycle-consistency loss and λ_per is the coefficient of the perceptual loss.
Based on the loss of each network part above, a coefficient is attached to the cycle-consistency loss and to the perceptual loss: λ_cyc for the cycle-consistency loss and λ_per for the perceptual loss. Together, these losses constitute the total loss function of the adversarial network model.
Please refer to fig. 5, which shows the results of an image enhancement experiment with the image enhancement system 10 based on the generative adversarial network according to the present invention. The leftmost column is the original night-scene image captured by the camera, the rightmost column is the corresponding normal-light image, and the middle column is the synthesized image generated by the adversarial network model 20 from the original night-scene captured image.
Referring to fig. 6, a schematic diagram of an image enhancement system based on a generative adversarial network according to another embodiment of the present invention is shown. The difference from the previous embodiment is that the night scene composite image 4330 is input into the normal light image synthesis module 431 to generate a normal light reverse composite image 471, and the cycle consistency loss and perceptual loss of the normal light image are computed from the reverse composite image 471 and the original normal light captured image 313.

On the other hand, the normal light composite image 4310 is input into the night scene image synthesis module 433 to generate a night scene reverse composite image 473, and the cycle consistency loss and perceptual loss of the night scene image are computed from the reverse composite image 473 and the original night scene captured image 311.
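The round-trip comparison just described is formula three, the cycle consistency loss: each image is mapped through both generators and compared to its reconstruction with an L1 distance. A minimal numpy sketch (the arrays stand in for images; the generator passes themselves are assumed to have already produced `x_cycled` and `y_cycled`):

```python
import numpy as np

def cycle_consistency_loss(x, x_cycled, y, y_cycled):
    """Formula three: mean L1 distance between each original image and
    its round-trip reconstruction (x -> G_AB -> G_BA -> x_cycled)."""
    return float(np.mean(np.abs(x - x_cycled)) + np.mean(np.abs(y - y_cycled)))

x = np.zeros((2, 2))
y = np.full((2, 2), 0.5)
loss = cycle_consistency_loss(x, x + 0.1, y, y)  # only x drifts by 0.1
```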
Compared with the prior art, the image enhancement system 10 based on the generative adversarial network adds a normal light image discrimination module and a night scene image discrimination module, both of which adopt PatchGAN to ensure that every local region of the enhanced image looks like realistic natural light, which is important for avoiding local over-exposure or under-exposure.
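The PatchGAN idea of scoring every local region and then aggregating can be illustrated with a toy verdict function over an n x n score map (the convolutional layers that would produce the map are omitted; the helper name and threshold are illustrative assumptions):

```python
import numpy as np

def patch_verdict(score_map, threshold=0.5):
    """Average the n x n patch scores; the mean becomes the whole-image
    real/fake result, so every local region must look realistic."""
    mean_score = float(np.mean(score_map))
    return mean_score, mean_score > threshold

scores = np.array([[0.9, 0.8],
                   [0.7, 0.6]])  # one discriminator score per local patch
mean_score, looks_real = patch_verdict(scores)
```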
Meanwhile, a multi-loss function is set in the adversarial network model. Applying it restores color information more faithfully and overcomes the tendency of generated pictures toward overly smooth textures that lack high-frequency information, and the perceptual loss ensures that the normal light image regenerated by the adversarial network model is well restored both in low-level pixel values and in high-level abstract features.
Furthermore, an attention mechanism is introduced into the image enhancement system based on the generative adversarial network; it guides the Unet-and-residual-block image synthesis module through the training process and preserves the texture and structure of the original image.
Finally, because the image synthesis module adopts a Unet with a residual block network, it can carry more information from the original image and thus generate a better normal light image.
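The way residual connections carry original-image information straight through can be seen in a minimal sketch; the `transform` argument stands in for the block's convolutions, which are not spelled out in the patent:

```python
import numpy as np

def residual_block(x, transform):
    """Output = x + F(x): the identity path carries the original
    image's information forward unchanged."""
    return x + transform(x)

x = np.array([1.0, 2.0, 3.0])
out = residual_block(x, lambda t: 0.1 * t)  # a small learned correction
```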
The invention also provides an image enhancement method based on a generative adversarial network, comprising the following steps:
providing an original image acquisition module to generate an original captured image set, wherein the original captured image set comprises a plurality of original captured image pairs, each pair comprising an original night scene captured image and an original normal light captured image, the original night scene captured image and the original normal light captured image of each pair being captured images of the same subject;
providing an adversarial network model, wherein the adversarial network model comprises an image synthesis module, an image discrimination module and a multi-loss function; the image synthesis module receives an original captured image from the original image acquisition module and outputs a composite image; the image discrimination module receives the composite image from the image synthesis module and outputs an enhanced image; and the multi-loss function measures the image loss according to the output of the image discrimination module and adjusts the composite image output by the image synthesis module, thereby improving the enhancement effect.
Further, the multi-loss function comprises an adversarial loss function, a cycle consistency loss function, a perceptual loss function and a total loss function, which satisfy the following formulas:
Formula one:

L_GAN(G_AB, D_B, X, Y) = E_y[log D_B(y)] + E_x[log(1 - D_B(G_AB(x)))]

Formula two:

L_GAN(G_BA, D_A, Y, X) = E_x[log D_A(x)] + E_y[log(1 - D_A(G_BA(y)))]

Formula three:

L_cyc(G_AB, G_BA, X, Y) = E_x[||G_BA(G_AB(x)) - x||_1] + E_y[||G_AB(G_BA(y)) - y||_1]

Formula four:

L_per(G_AB, G_BA, X, Y) = (1/(H·W)) ||Φ_j(G_BA(G_AB(x))) - Φ_j(x)||^2 + (1/(H·W)) ||Φ_j(G_AB(G_BA(y))) - Φ_j(y)||^2

Formula five:

L(G_AB, G_BA, D_A, D_B, X, Y) = L_GAN(G_AB, D_B, X, Y) + L_GAN(G_BA, D_A, Y, X) + λ_cyc·L_cyc(G_AB, G_BA, X, Y) + λ_per·L_per(G_AB, G_BA, X, Y)
wherein: gABIs a normal light image synthesis module, X is an original night scene shooting image, GAB(x) Representing the originalNight scene shooting image passing generator GABThe generated ordinary light composite image;
GBAis a night scene image synthesis module, y is an original normal light shot image, GBA(y) represents that the original normal light shooting image passes through the night scene image synthesis module GBAGenerating a synthetic night scene image;
DBthe normal light image distinguishing module is used for distinguishing the authenticity of the generated normal light synthetic image and the original normal light shot image;
DAthe night scene image distinguishing module is used for distinguishing the authenticity of the generated night scene composite image and the original night scene shooting image;
log is a logarithmic operation, E represents a distribution function expected value;
phi j refers to a feature map of a j-th layer network generated by the convolutional network according to the input picture, H represents the height of the feature map, and W represents the width of the feature map;
LGAN(GAB,DB,X,Y),LGAN(GBA,DAy, X) represents the challenge loss;
Lcyc(GAB,GBAx, Y) represents a loss of cyclic consistency;
Lper(GAB,GBAx, Y) represents a loss of perception;
λ cyc is the coefficient of the cyclic consistency loss and λ per is the coefficient of the perceptual loss.
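Formulas one and two score the discriminator on real versus generated images. A numpy sketch over batches of discriminator outputs (probabilities in the open interval (0, 1); the helper name is illustrative):

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    """E[log D(real)] + E[log(1 - D(fake))], as in formulas one and two;
    d_real and d_fake hold the discriminator's per-sample probabilities."""
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

# A confident discriminator (real near 1, fake near 0) drives this toward 0.
loss = adversarial_loss(np.array([0.9, 0.95]), np.array([0.1, 0.05]))
```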
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structural and process modifications made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the present invention.

Claims (13)

1. An image enhancement system based on a generative adversarial network, comprising:

an original image acquisition module, which generates an original captured image set, the original captured image set comprising a plurality of original captured image pairs, each pair comprising an original night scene captured image and an original normal light captured image, the original night scene captured image and the original normal light captured image of each pair being captured images of the same subject;

an adversarial network model, comprising:

an image synthesis module, which receives the original captured image from the original image acquisition module and outputs a composite image;

an image discrimination module, which receives the composite image from the image synthesis module and outputs an enhanced image; and

a multi-loss function, which measures the image loss according to the output of the image discrimination module and adjusts the composite image output by the image synthesis module to improve the enhancement effect, characterized in that the loss functions comprise an adversarial loss function, a cycle consistency loss function, a perceptual loss function and a total loss function.
2. The image enhancement system based on the generative adversarial network of claim 1, further comprising an attention feature generation module, wherein the attention feature generation module uses an attention mechanism network model to perform a weighted summation over adjacent node features in the original night scene captured image and the original normal light captured image, generating a night scene attention feature map and a normal light attention feature map.

3. The image enhancement system based on the generative adversarial network of claim 2, wherein the image synthesis module comprises a normal light image synthesis module that combines the original night scene captured image and the night scene attention feature map into a normal light composite image, and the normal light image discrimination module receives the normal light composite image and the original normal light captured image and outputs an enhanced image.

4. The image enhancement system based on the generative adversarial network of claim 2 or 3, wherein the image synthesis module further comprises a night scene image synthesis module that combines the original normal light captured image and the normal light attention feature map into a night scene composite image, and the night scene image discrimination module receives the night scene composite image and the original night scene captured image and outputs an enhanced image.

5. The image enhancement system based on the generative adversarial network of claim 2, wherein the network structure of the image synthesis module adopts a Unet and a residual block network.

6. The image enhancement system based on the generative adversarial network of claim 5, wherein the Unet network comprises coding blocks, decoding blocks, down-sampling and up-sampling; the original night scene captured image and the night scene attention feature map undergo convolutions at different levels in the coding blocks and max-pooling down-sampling to learn deep features of the original night scene captured image; the deep features pass through deconvolution up-sampling and the decoding blocks, and the composite image is then output sequentially through a convolutional layer, a residual block network and an up-sampling layer.

7. The image enhancement system based on the generative adversarial network of claim 1, wherein the image discrimination module adds PatchGAN: the composite image from the image synthesis module is input into the convolutional network to obtain an n x n feature map, the n x n matrix is discriminated, and the mean value of the matrix is taken as the true/false result output for the enhanced image.
8. The image enhancement system based on the generative adversarial network of claim 4, wherein the adversarial loss function satisfies the following formulas:

L_GAN(G_AB, D_B, X, Y) = E_y[log D_B(y)] + E_x[log(1 - D_B(G_AB(x)))]

L_GAN(G_BA, D_A, Y, X) = E_x[log D_A(x)] + E_y[log(1 - D_A(G_BA(y)))]

wherein: G_AB is the normal light image synthesis module, x is an original night scene captured image, and G_AB(x) denotes the normal light composite image generated by passing the original night scene captured image through the generator G_AB;

G_BA is the night scene image synthesis module, y is an original normal light captured image, and G_BA(y) denotes the night scene composite image generated by passing the original normal light captured image through G_BA;

D_B is the normal light image discrimination module, which judges the authenticity of the generated normal light composite image against the original normal light captured image;

D_A is the night scene image discrimination module, which judges the authenticity of the generated night scene composite image against the original night scene captured image.
9. The image enhancement system based on the generative adversarial network of claim 8, wherein the cycle consistency loss function satisfies the following formula:

L_cyc(G_AB, G_BA, X, Y) = E_x[||G_BA(G_AB(x)) - x||_1] + E_y[||G_AB(G_BA(y)) - y||_1]

wherein: log is the logarithmic operation and E denotes the expectation over the distribution.
10. The image enhancement system based on the generative adversarial network of claim 9, wherein the perceptual loss function satisfies the following formula:

L_per(G_AB, G_BA, X, Y) = (1/(H·W)) ||Φ_j(G_BA(G_AB(x))) - Φ_j(x)||^2 + (1/(H·W)) ||Φ_j(G_AB(G_BA(y))) - Φ_j(y)||^2

wherein: Φ_j is the feature map produced by the j-th layer of the convolutional network for the input picture, H is the height of the feature map and W is its width.
11. The image enhancement system based on the generative adversarial network of claim 10, wherein the total loss function satisfies the following formula:

L(G_AB, G_BA, D_A, D_B, X, Y) = L_GAN(G_AB, D_B, X, Y) + L_GAN(G_BA, D_A, Y, X) + λ_cyc·L_cyc(G_AB, G_BA, X, Y) + λ_per·L_per(G_AB, G_BA, X, Y)

wherein:

L_GAN(G_AB, D_B, X, Y) and L_GAN(G_BA, D_A, Y, X) denote the adversarial losses;

L_cyc(G_AB, G_BA, X, Y) denotes the cycle consistency loss;

L_per(G_AB, G_BA, X, Y) denotes the perceptual loss;

λ_cyc is the coefficient of the cycle consistency loss and λ_per is the coefficient of the perceptual loss.
12. An image enhancement method based on a generative adversarial network, comprising the following steps:

providing an original image acquisition module to generate an original captured image set, wherein the original captured image set comprises a plurality of original captured image pairs, each pair comprising an original night scene captured image and an original normal light captured image, the two being captured images of the same subject;

providing an adversarial network model, wherein the adversarial network model comprises an image synthesis module, an image discrimination module and a multi-loss function; the image synthesis module receives an original captured image from the original image acquisition module and outputs a composite image; the image discrimination module receives the composite image from the image synthesis module and outputs an enhanced image; and the multi-loss function measures the image loss according to the output of the image discrimination module and adjusts the composite image output by the image synthesis module, thereby improving the enhancement effect.
13. The image enhancement method based on the generative adversarial network of claim 12, wherein the multi-loss function comprises an adversarial loss function, a cycle consistency loss function, a perceptual loss function and a total loss function, which satisfy the following formulas:
Formula one:

L_GAN(G_AB, D_B, X, Y) = E_y[log D_B(y)] + E_x[log(1 - D_B(G_AB(x)))]

Formula two:

L_GAN(G_BA, D_A, Y, X) = E_x[log D_A(x)] + E_y[log(1 - D_A(G_BA(y)))]

Formula three:

L_cyc(G_AB, G_BA, X, Y) = E_x[||G_BA(G_AB(x)) - x||_1] + E_y[||G_AB(G_BA(y)) - y||_1]

Formula four:

L_per(G_AB, G_BA, X, Y) = (1/(H·W)) ||Φ_j(G_BA(G_AB(x))) - Φ_j(x)||^2 + (1/(H·W)) ||Φ_j(G_AB(G_BA(y))) - Φ_j(y)||^2

Formula five:

L(G_AB, G_BA, D_A, D_B, X, Y) = L_GAN(G_AB, D_B, X, Y) + L_GAN(G_BA, D_A, Y, X) + λ_cyc·L_cyc(G_AB, G_BA, X, Y) + λ_per·L_per(G_AB, G_BA, X, Y)
wherein: gABIs a normal light image synthesis module, X is an original night scene shooting image, GAB(x) Representing the original night scene shot image through the generator GABThe generated ordinary light composite image;
GBAis a night scene image synthesis module, y is an original normal light shot image, GBA(y) represents that the original normal light shooting image passes through the night scene image synthesis module GBAGenerating a synthetic night scene image;
DBthe normal light image distinguishing module is used for distinguishing the authenticity of the generated normal light synthetic image and the original normal light shot image;
DAthe night scene image distinguishing module is used for distinguishing the authenticity of the generated night scene composite image and the original night scene shooting image;
log is a logarithmic operation, E represents a distribution function expected value;
phi j refers to a feature map of a j-th layer network generated by the convolutional network according to the input picture, H represents the height of the feature map, and W represents the width of the feature map;
LGAN(GAB,DB,X,Y),LGAN(GBA,DAy, X) represents the challenge loss;
Lcyc(GAB,GBAx, Y) represents a loss of cyclic consistency;
Lper(GAB,GBAx, Y) represents a loss of perception;
λ cyc is the coefficient of the cyclic consistency loss and λ per is the coefficient of the perceptual loss.
CN202110375243.3A 2021-04-07 2021-04-07 Image enhancement system and image enhancement method based on generation countermeasure network Withdrawn CN113034417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110375243.3A CN113034417A (en) 2021-04-07 2021-04-07 Image enhancement system and image enhancement method based on generation countermeasure network


Publications (1)

Publication Number Publication Date
CN113034417A true CN113034417A (en) 2021-06-25

Family

ID=76454074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110375243.3A Withdrawn CN113034417A (en) 2021-04-07 2021-04-07 Image enhancement system and image enhancement method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN113034417A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592752A (en) * 2021-07-12 2021-11-02 四川大学 Road traffic optical stain image enhancement method and device based on countermeasure network
CN113888443A (en) * 2021-10-21 2022-01-04 福州大学 Sing concert shooting method based on adaptive layer instance normalization GAN
CN115375544A (en) * 2022-08-08 2022-11-22 中加健康工程研究院(合肥)有限公司 Super-resolution method for generating countermeasure network based on attention and UNet network
CN115879516A (en) * 2023-03-02 2023-03-31 南昌大学 Data evidence obtaining method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210625)