CN115953317A - Image enhancement method and device, electronic equipment and storage medium - Google Patents

Image enhancement method and device, electronic equipment and storage medium

Info

Publication number
CN115953317A
Authority
CN
China
Prior art keywords
image
sample
feature map
loss
adversarial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211740347.0A
Other languages
Chinese (zh)
Inventor
田静
方明
刘鹏
陈霆
王洪源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Kexun Information Technology Co ltd
Original Assignee
Shandong Kexun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Kexun Information Technology Co ltd filed Critical Shandong Kexun Information Technology Co ltd
Priority to CN202211740347.0A priority Critical patent/CN115953317A/en
Publication of CN115953317A publication Critical patent/CN115953317A/en
Pending legal-status Critical Current

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image enhancement method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a target image; and inputting the target image into an image enhancement model to obtain an enhanced image of the target image. The image enhancement model is obtained by unsupervised training of a generative adversarial network based on an adversarial loss and a contrastive loss. The contrastive loss is obtained by contrastive learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by the generator in the adversarial network; the adversarial loss is obtained by adversarial learning based on the sample enhanced image and a third sample image in a second sample image set. The invention ensures both that the image quality of the enhanced image reaches a high-quality standard and that the enhanced image remains strongly correlated with the content of the original image, so that image enhancement is performed accurately and efficiently.

Description

Image enhancement method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image enhancement method and apparatus, an electronic device, and a storage medium.
Background
In recent years, hardware imaging devices have developed rapidly, and the high-quality images they capture can carry more information. However, owing to the complexity of the imaging environment, images obtained directly from an imaging device are often degraded: for example, underexposure produces low-illumination images, unwanted scattering causes blurring, and device errors introduce noise. How to provide an image enhancement method with strong practicality and high generalization ability is therefore of significant research interest.
In the prior art, deep learning has made remarkable progress on various image processing tasks. Typically, a deep learning model is trained with supervision on paired image samples (images with consistent content but different quality), learning the mapping from the low-quality sample image to the high-quality sample image through a complex nonlinear network, thereby implementing image enhancement. However, because of the complexity of the imaging environment, paired supervised data (i.e., image sample pairs) are difficult to acquire, which leads to insufficient training data. As a result, a deep learning model that enhances images accurately cannot be obtained, and it is difficult to enhance images accurately and efficiently.
Disclosure of Invention
The invention provides an image enhancement method and apparatus, an electronic device, and a storage medium, to address the defects of the prior art that, owing to the complexity of the imaging environment, paired supervised data are difficult to acquire, training data are insufficient, and image enhancement is difficult to realize accurately and efficiently.
The invention provides an image enhancement method, which comprises the following steps:
acquiring a target image;
inputting the target image into an image enhancement model to obtain an enhanced image of the target image;
wherein the image enhancement model is obtained by unsupervised training of a generative adversarial network based on an adversarial loss and a contrastive loss; the contrastive loss is obtained by contrastive learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by the generator in the adversarial network; the adversarial loss is obtained by adversarial learning based on the sample enhanced image and a third sample image in a second sample image set; and the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
According to the image enhancement method provided by the invention, the image enhancement model is trained based on the following steps:
inputting the first sample image, the second sample image and the sample enhanced image into a feature extraction model to obtain a feature map of the first sample image, a feature map of the second sample image and a feature map of the sample enhanced image;
performing contrastive learning according to the feature map of the first sample image, the feature map of the second sample image and the feature map of the sample enhanced image to obtain the contrastive loss;
inputting the sample enhanced image and the third sample image into the discriminator of the generative adversarial network and performing adversarial learning to obtain the adversarial loss;
iteratively performing adversarial training on the parameters of the generative adversarial network according to the adversarial loss and the contrastive loss; and
constructing the image enhancement model from the generator of the trained generative adversarial network.
According to an image enhancement method provided by the present invention, performing contrastive learning according to the feature map of the first sample image, the feature map of the second sample image and the feature map of the sample enhanced image to obtain the contrastive loss includes:
acquiring a first similarity distance between the feature map of the first sample image and the feature map of the sample enhanced image, and a second similarity distance between the feature map of the second sample image and the feature map of the sample enhanced image; and
determining the contrastive loss according to the first similarity distance and the second similarity distance, the contrastive loss taking minimization of the first similarity distance and maximization of the second similarity distance as its objective.
According to an image enhancement method provided by the present invention, inputting the sample enhanced image and the third sample image into the discriminator of the generative adversarial network and performing adversarial learning to obtain the adversarial loss includes:
inputting the sample enhanced image and the third sample image into the discriminator to obtain a discrimination result for the sample enhanced image and a discrimination result for the third sample image; and
determining the adversarial loss according to the discrimination result for the sample enhanced image and the discrimination result for the third sample image.
According to the image enhancement method provided by the invention, iteratively performing adversarial training on the parameters of the generative adversarial network according to the adversarial loss and the contrastive loss includes:
for the current round of adversarial training, fixing the parameters of the discriminator obtained in the previous round, and training the parameters of the generator obtained in the previous round with the objective of minimizing the fusion of the adversarial loss and the contrastive loss, to obtain the generator corresponding to the current round;
fixing the generator corresponding to the current round, and training the parameters of the discriminator obtained in the previous round with maximization of the adversarial loss as the optimization objective, to obtain the discriminator corresponding to the current round; and
iteratively performing the next round of adversarial training based on the discriminator and generator corresponding to the current round, until the generative adversarial network satisfies a preset termination condition.
According to an image enhancement method provided by the invention, the image enhancement model comprises an encoder, an attention module, a residual module and a decoder;
and inputting the target image into the image enhancement model to obtain the enhanced image of the target image includes:
inputting the target image into the encoder for downsampling to obtain a first feature map of the target image;
inputting the first feature map into the attention module for detail feature extraction to obtain a second feature map of the target image;
inputting the second feature map into the residual module for residual operations to obtain a third feature map of the target image; and
inputting the third feature map into the decoder for upsampling to obtain the enhanced image of the target image.
According to an image enhancement method provided by the invention, the attention module comprises a channel attention unit and a spatial attention unit;
and inputting the first feature map into the attention module for detail feature extraction to obtain the second feature map of the target image includes:
inputting the first feature map into the channel attention unit, calculating channel weights of the first feature map along the channel dimension, and adjusting the feature sub-map of each channel in the first feature map according to the channel weights; and
inputting the adjusted first feature map into the spatial attention unit, calculating spatial position weights of the adjusted first feature map along the spatial dimensions, and adjusting the feature sub-map at each spatial position in the adjusted first feature map according to the spatial position weights, to obtain the second feature map.
The present invention also provides an image enhancement apparatus comprising:
an acquisition unit configured to acquire a target image;
the image enhancement unit is used for inputting the target image into an image enhancement model to obtain an enhanced image of the target image;
wherein the image enhancement model is obtained by unsupervised training of a generative adversarial network based on an adversarial loss and a contrastive loss; the contrastive loss is obtained by contrastive learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by the generator in the adversarial network; the adversarial loss is obtained by adversarial learning based on the sample enhanced image and a third sample image in a second sample image set; and the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
The present invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the image enhancement method as described in any of the above when executing the program.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an image enhancement method as described in any of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the image enhancement method as described in any one of the above.
According to the image enhancement method and apparatus, the electronic device and the storage medium provided by the invention, the image enhancement model is trained without supervision by combining the contrastive loss and the adversarial loss, and the target image is enhanced by the image enhancement model. On the one hand, no paired supervised data need be acquired, which effectively reduces the impact of the lack of such data and yields higher training efficiency and accuracy; on the other hand, the image quality of the enhanced image is guaranteed to reach a high-quality standard while the enhanced image remains strongly correlated with the content of the original image before enhancement, so that image enhancement is performed accurately and efficiently.
Drawings
To illustrate the technical solutions of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of an image enhancement method provided by the present invention;
FIG. 2 is a schematic flowchart of a training method of an image enhancement model in the image enhancement method provided by the present invention;
FIG. 3 is a second schematic flowchart of a method for training an image enhancement model in an image enhancement method according to the present invention;
FIG. 4 is a schematic structural diagram of the generative adversarial network in the image enhancement method provided by the present invention;
FIG. 5 is a schematic structural diagram of an attention module in the image enhancement method provided by the present invention;
FIG. 6 is a schematic structural diagram of an image enhancement apparatus provided by the present invention;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the prior art, to enhance image quality, researchers have proposed spatial-domain image enhancement methods, which are mainly based on gray-scale mapping of pixel values and include histogram modification, gray-scale transformation, spatial filtering and the like. These methods focus on improving image contrast, but when the gray-scale spatial distribution of an image is unreasonable, the enhancement effect after transformation can hardly meet requirements.
Other researchers have proposed frequency-domain image enhancement methods, in which the image is transformed into the frequency domain by convolution, frequency components within a certain range are suppressed while other components are left unaffected, and the result is transformed back into the image space, thereby achieving enhancement. The main frequency-domain methods are low-pass, high-pass and homomorphic filtering; they aim to remedy a specific shortcoming of the image without attending to other forms of distortion, such as artifacts and loss of detail.
Still other researchers have proposed prior-model-based image enhancement methods, which build a model on specific prior knowledge of the image from an understanding of the degradation process and restore the degraded image to its pre-degradation state. Common models include the atmospheric turbulence model, the dark channel prior, degradation models and the like. The physical models built this way depend on the theoretical knowledge of the researchers and require an essential understanding of the cause of degradation; moreover, a specific scene needs a specific model, so the generalization ability of such models cannot meet practical requirements.
To address the problems of poor generalization and poor accuracy, researchers have proposed deep-learning-based image enhancement methods, which learn the mapping between input and output through a complex nonlinear network so as to realize image enhancement efficiently and accurately. However, because of the complexity of the imaging environment, paired supervised data are difficult to acquire, which leads to insufficient training data. As a result, a deep learning model that enhances images accurately cannot be obtained, and it is difficult to enhance images accurately and efficiently.
In view of the above problems, the present embodiment provides an image enhancement method. Fig. 1 is a schematic flowchart of the image enhancement method provided in the present invention, and as shown in fig. 1, the method includes:
step 101, acquiring a target image;
the target image may include an image that needs image enhancement, such as an image that needs contrast enhancement, detail texture enhancement, saturation enhancement, and the like. The image may be acquired in real time by an image acquisition device, where the image acquisition device may be a camera, a smart phone, a tablet computer, or an intelligent electrical appliance, such as a television, an air conditioner, and the like, which includes an image acquisition function, and after the image acquisition device acquires a target image by a camera, the image acquisition device may also perform noise reduction and/or normalization processing on the target image, and the embodiment is not limited specifically.
Step 102, inputting the target image into an image enhancement model to obtain an enhanced image of the target image;
the image enhancement model is obtained by carrying out unsupervised training on an antagonistic network based on the antagonistic loss and the contrast loss; the contrast loss is obtained by performing contrast learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by a generator in the countermeasure network; the confrontation loss is obtained by confrontation learning based on the sample enhanced image and a third sample image in the second sample image set; the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
The image enhancement model is constructed from the generator of the generative adversarial network and includes, but is not limited to, an encoder, an attention module, a residual module and a decoder, which this embodiment does not specifically limit.
The discriminator may be implemented with a Markov discriminator (PatchGAN) structure, which outputs a two-dimensional matrix through four convolutional layers with 4 × 4 kernels and a stride of 2.
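As a concrete illustration, the following is a minimal PyTorch sketch of such a PatchGAN-style discriminator. The patent only specifies four 4 × 4, stride-2 convolutions ending in a two-dimensional score matrix; the channel widths and the final 1 × 1 projection are illustrative assumptions, not details fixed by the patent.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: four 4x4, stride-2 convolutions
    ending in a 2-D score map (one real/fake score per image patch).
    Channel widths (64/128/256/512) are assumptions for illustration."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        layers, prev, ch = [], in_channels, 64
        for _ in range(4):
            layers += [nn.Conv2d(prev, ch, kernel_size=4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            prev, ch = ch, ch * 2
        # 1x1 projection to a single-channel patch score map (assumed)
        layers.append(nn.Conv2d(prev, 1, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # shape: (N, 1, H/16, W/16)
```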
The image enhancement model performs multi-level feature learning on the input low-quality image to reconstruct a high-quality image, thereby realizing image enhancement accurately and efficiently.
Optionally, before performing step 102, the image enhancement model may be trained in advance, and the specific training step includes:
a generative adversarial network is first created, which may contain: a generator G for converting the degraded image x into the enhanced image x̂; a discriminator D for judging whether the real high-quality image y and the generated image x̂ are real or fake; and a feature extraction model for assisting contrastive learning.
The generator and the discriminator may be modules whose parameters have just been initialized in preparation for adversarial learning, or modules pre-trained with adversarial-learning capability; likewise, the feature extraction model may be a module whose parameters have just been initialized in preparation for feature extraction, or a pre-trained model with feature extraction capability. This embodiment of the present invention is not specifically limited in this respect.
In addition, a sample image library needs to be constructed. The degraded low-quality images in the library, i.e., images whose contrast, detail texture, saturation or other qualities do not meet expectations, form the first sample image set (hereinafter the X domain); the desired high-quality images, i.e., images whose contrast, detail texture, saturation and other qualities meet expectations, form the second sample image set (hereinafter the Y domain).
A batch of sample images is drawn from the first sample image set, comprising a first sample image to be enhanced and second sample images, namely the other images of the batch; a third sample image is drawn from the second sample image set. The first sample image and the third sample image are not paired in advance; that is, their image contents may be the same or different.
Assuming the batch size during training is set to n, the first sample image may be denoted x, its corresponding sample enhanced image x̂, and the other images of the same batch, i.e., the second sample images, {x_1, x_2, …, x_{n−1}}.
Then the first sample image is input into the generator of the generative adversarial network, which enhances it to obtain the sample enhanced image of the first sample image; contrastive learning is performed on the sample enhanced image, the first sample image and the second sample images through the feature extraction model to obtain the contrastive loss; adversarial learning is performed by the discriminator on the sample enhanced image and a third sample image from the second sample image set to obtain the adversarial loss; and the generative adversarial network is trained without supervision based on the adversarial loss and the contrastive loss.
The adversarial loss ensures that a generated high-quality enhanced image has image quality close to that of the Y-domain images. However, since the training data are unpaired, the enhanced image lacks a ground-truth constraint, and the generator G may map the input original image to some other output that merely conforms to the Y domain. Therefore, this embodiment adopts the contrastive loss on top of the adversarial loss, which effectively ensures that the enhanced image remains strongly correlated with the content of the original image before enhancement.
After the trained generative adversarial network is obtained, the image enhancement model is constructed from its generator.
After the target image is acquired, feature extraction and data reconstruction may be performed on the target image based on an image enhancement model, thereby automatically outputting an enhanced image of the target image, i.e., a high-quality target image.
It can be understood that, for scenarios in which paired image samples are scarce, this embodiment adopts an unsupervised training method based on contrastive loss and adversarial loss and can train the image enhancement model on unpaired data, which effectively improves the practicality of the method. In principle the model can be trained on an unsupervised training set and enhance images in multiple respects, such as contrast enhancement and detail improvement, giving the image enhancement flexibility and generality.
According to the image enhancement method provided by this embodiment, the image enhancement model is trained without supervision by combining the contrastive loss and the adversarial loss, and the target image is enhanced by the image enhancement model. On the one hand, no paired supervised data need be acquired, which effectively reduces the impact of the lack of such data and yields higher training efficiency and accuracy; on the other hand, the image quality of the enhanced image is guaranteed to reach a high-quality standard while the enhanced image remains strongly correlated with the content of the original image before enhancement, so that image enhancement is performed accurately and efficiently.
In some embodiments, fig. 2 is a schematic flowchart of the training method of the image enhancement model provided in this embodiment, comprising the following steps:
step 201, inputting the first sample image, the second sample image and the sample enhanced image into a feature extraction model to obtain a feature map of the first sample image, a feature map of the second sample image and a feature map of the sample enhanced image;
step 202, performing contrastive learning according to the feature map of the first sample image, the feature map of the second sample image and the feature map of the sample enhanced image to obtain the contrastive loss;
step 203, inputting the sample enhanced image and the third sample image into the discriminator of the generative adversarial network and performing adversarial learning to obtain the adversarial loss;
step 204, iteratively performing adversarial training on the parameters of the generative adversarial network according to the adversarial loss and the contrastive loss;
step 205, constructing the image enhancement model from the generator of the trained generative adversarial network.
The feature extraction model may be constructed from a pre-trained VGG-19 (Visual Geometry Group, 19-layer) network.
Optionally, the training step of the image enhancement model specifically includes:
In the contrastive learning process, the sample enhanced image x̂ of the first sample image x serves as the anchor; the first sample image x, whose content the enhanced image should preserve, is taken as the positive sample, and the second sample images {x_1, x_2, …, x_{n−1}} of the same batch are defined as the negative samples. Through contrastive learning between the anchor and the positive and negative samples, the distance between the anchor and the positive sample becomes ever smaller while the distance between the anchor and the negative samples is made as large as possible, so that consistency of image content between input and output is achieved without ground-truth constraints.
Therefore, the first sample image, the second sample image and the sample enhanced image are input into the feature extraction model, which performs multi-scale feature extraction on them to obtain the feature map of the first sample image, the feature map of the second sample image and the feature map of the sample enhanced image.
Then, the contrastive loss is determined according to the similarity distance between the feature map of the first sample image and the feature map of the sample enhanced image, and the similarity distance between the feature map of the second sample image and the feature map of the sample enhanced image.
In the contrastive learning process, the goal of the generator G is to minimize the contrastive loss so as to generate high-quality images x̂ close to the target domain Y; the goal of the discriminator D is to maximize the adversarial loss and improve its ability to distinguish real from fake.
Optionally, the sample enhanced image and the third sample image are input into the discriminator, and the discriminator performs adversarial learning to obtain the adversarial loss.
It should be noted that steps 201 and 203 may be executed in parallel or one after the other; this embodiment is not specifically limited in this respect.
Once the contrastive loss and the adversarial loss are obtained, unsupervised adversarial training can be performed on the generative adversarial network according to them, to obtain the trained generative adversarial network.
After the trained generative adversarial network is obtained, the image enhancement model is constructed from its generator.
According to the image enhancement method provided by this embodiment, the image enhancement model is adversarially trained without supervision by combining the contrastive loss and the adversarial loss, so that the image quality of the enhanced image generated by the trained model reaches a high-quality standard while remaining strongly correlated with the content of the original image before enhancement, effectively guaranteeing the effectiveness and accuracy of image enhancement.
In some embodiments, step 202 further comprises:
acquiring a first similarity distance between the feature map of the first sample image and the feature map of the sample enhanced image, and a second similarity distance between the feature map of the second sample image and the feature map of the sample enhanced image; and
determining the contrastive loss according to the first similarity distance and the second similarity distance, the contrastive loss taking minimization of the first similarity distance and maximization of the second similarity distance as its objective.
The similarity distance may be an ℓ1-norm distance, which may be set according to actual requirements.
Optionally, the difference between the feature map of the sample enhanced image and the feature map of the first sample image is computed and its ℓ1-norm taken, giving the first similarity distance; the difference between the feature map of the sample enhanced image and the feature map of each second sample image is computed and its ℓ1-norm taken, giving the second similarity distances. The first similarity distance is then divided by the sum of the second similarity distances over all second sample images to determine the contrastive loss. The specific calculation formula is:
L_CL = Σ_{l=1}^{L} w_l · ‖F_l(x̂) − F_l(x)‖₁ / Σ_{i=1}^{n−1} ‖F_l(x̂) − F_l(x_i)‖₁
where L_CL is the contrastive loss; L is the number of feature layers of the feature extraction model; w_l is the weight coefficient of the l-th feature layer; F_l(·) is the feature map output by the l-th feature layer; x, x̂ and x_i are the first sample image, the sample enhanced image of the first sample image, and a second sample image, respectively; and ‖·‖₁ is the ℓ1-norm distance. The contrastive constraint is realized by minimizing L_CL.
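The following is a minimal PyTorch sketch of this contrastive loss, assuming a torchvision VGG-19 backbone as the feature extraction model; the chosen feature-layer indices, the layer weights w_l, and the mean-normalized ℓ1 distance are illustrative assumptions, not values fixed by the patent.

```python
import torch
import torchvision.models as models

class ContrastiveLoss(torch.nn.Module):
    """L_CL: weighted sum over feature layers of
    ||F_l(x_hat) - F_l(x)||_1 / sum_i ||F_l(x_hat) - F_l(x_i)||_1.
    Layer indices and weights are illustrative assumptions."""
    def __init__(self, layer_ids=(2, 7, 12, 21, 30), weights=(1.0,) * 5):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # frozen; used only to extract features
        self.vgg, self.layer_ids, self.weights = vgg, set(layer_ids), weights

    def _features(self, img):
        feats, h = [], img
        for i, layer in enumerate(self.vgg):
            h = layer(h)
            if i in self.layer_ids:
                feats.append(h)
        return feats

    def forward(self, x, x_hat, negatives):
        # x: positive (first sample image); x_hat: anchor (enhanced image);
        # negatives: second sample images of the batch, shape (n-1, C, H, W)
        f_x, f_hat = self._features(x), self._features(x_hat)
        f_negs = [self._features(neg.unsqueeze(0)) for neg in negatives]
        loss = x.new_zeros(())
        for l, w in enumerate(self.weights):
            pos = (f_hat[l] - f_x[l]).abs().mean()                        # first distance
            neg = sum((f_hat[l] - fn[l]).abs().mean() for fn in f_negs)   # second distances
            loss = loss + w * pos / (neg + 1e-8)
        return loss
```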
This embodiment thus ensures, without relying on paired sample images, that the enhanced image generated by the generator remains strongly correlated with the content of the original image before enhancement, effectively guaranteeing the effectiveness and accuracy of image enhancement.
In some embodiments, step 203 further comprises:
inputting the sample enhanced image and the third sample image into the discriminator to obtain a discrimination result for the sample enhanced image and a discrimination result for the third sample image; and
determining the adversarial loss according to the discrimination result for the sample enhanced image and the discrimination result for the third sample image.
Optionally, in the adversarial learning process, the goal of the generator G is to minimize the adversarial loss so as to generate high-contrast, high-quality images x̂ close to the target domain Y, while the goal of the discriminator D is to maximize the adversarial loss so as to improve its ability to distinguish real from fake. The specific calculation formula is:
L_GAN(G, D, x, y) = E_{y~f(y)}[log D(y)] + E_{x~f(x)}[log(1 − D(G(x)))]
where L_GAN(·) is the adversarial loss; x and y obey the true sample-image distributions f(x) and f(y); E(·) denotes the mathematical expectation over the distribution; D(y) is the discrimination result for the third sample image; and D(G(x)) is the discrimination result for the sample enhanced image of the first sample image.
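A minimal sketch of how this adversarial loss could be computed with the PatchGAN-style discriminator sketched above; the binary-cross-entropy form with a sigmoid on the discriminator output, the averaging over patches, and the variable names are illustrative assumptions.

```python
import torch

def adversarial_loss(discriminator, real_y, fake_y_hat):
    """L_GAN = E_y[log D(y)] + E_x[log(1 - D(G(x)))].
    real_y: third sample images from the Y domain;
    fake_y_hat: sample enhanced images G(x) from the generator.
    With a PatchGAN discriminator, D outputs a patch score map, so the
    log terms are averaged over all patches (an assumption in this sketch)."""
    d_real = torch.sigmoid(discriminator(real_y))
    d_fake = torch.sigmoid(discriminator(fake_y_hat))
    eps = 1e-8  # numerical stability for the logarithms
    return (torch.log(d_real + eps).mean()
            + torch.log(1.0 - d_fake + eps).mean())
```

The discriminator is trained to maximize this quantity, while the generator is trained to minimize it, matching the objectives described above.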
In some embodiments, as shown in fig. 3, a second schematic flowchart of the training method of the image enhancement model provided in this embodiment, step 204 further includes:
step 301, for the current round of adversarial training, fixing the parameters of the discriminator obtained in the previous round, and training the parameters of the generator obtained in the previous round with the goal of minimizing the fusion of the adversarial loss and the contrastive loss, to obtain the generator corresponding to the current round;
step 302, fixing the generator corresponding to the current round, and training the parameters of the discriminator obtained in the previous round with maximization of the adversarial loss as the optimization objective, to obtain the discriminator corresponding to the current round;
step 303, iteratively performing the next round of adversarial training based on the discriminator and generator corresponding to the current round, i.e., returning to step 301, until the generative adversarial network satisfies a preset termination condition;
step 304, obtaining the trained generative adversarial network.
Optionally, in the loss function part, the discriminator D maximizes the adversarial loss L_GAN(·), optimizing its parameters so as to improve its ability to judge the real image y and the fake image x̂. The generator G minimizes the adversarial loss L_GAN(·), optimizing its parameters so that the generated x̂ comes closer to a high-quality image. At the same time, under the constraint of the contrastive loss L_CL, the generator G makes x̂ keep the content of the original image, thereby effectively improving the effectiveness and accuracy of image enhancement.
For the current round, the adversarial training process comprises the following steps:
When training the generator, the parameters of the discriminator obtained in the previous round are fixed, and the parameters of the generator obtained in the previous round are trained with the objective of minimizing the sum of the adversarial loss L_GAN(·) and the contrastive loss L_CL.
When training the discriminator, the generator obtained in the current round is fixed, and the parameters of the discriminator obtained in the previous round are trained with the objective of maximizing the adversarial loss L_GAN(·).
The adversarial training is performed iteratively, continuously optimizing the parameters of the generative adversarial network, until the network satisfies a preset termination condition, at which point training stops and the trained generative adversarial network is obtained. The preset termination condition includes reaching a maximum number of iterations and/or the generator's performance meeting requirements, which this embodiment does not specifically limit.
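A minimal sketch of one round of this alternating optimization, reusing the loss sketches above; the optimizers, the unit fusion weight on the contrastive loss, and the function and variable names are illustrative assumptions.

```python
import torch

def train_one_round(generator, discriminator, contrastive_loss,
                    x, negatives, y, g_opt, d_opt, cl_weight=1.0):
    """One round of alternating adversarial training.
    x: first sample image(s); negatives: second sample images;
    y: third sample image(s); cl_weight: assumed fusion weight."""
    # Step 301: fix D, update G to minimize L_GAN + L_CL.
    for p in discriminator.parameters():
        p.requires_grad_(False)
    x_hat = generator(x)
    g_loss = (adversarial_loss(discriminator, y, x_hat)
              + cl_weight * contrastive_loss(x, x_hat, negatives))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # Step 302: fix G, update D to maximize L_GAN (minimize its negation).
    for p in discriminator.parameters():
        p.requires_grad_(True)
    with torch.no_grad():
        x_hat = generator(x)  # detached from the generator's graph
    d_loss = -adversarial_loss(discriminator, y, x_hat)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    return g_loss.item(), d_loss.item()
```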
In this embodiment, adversarial training is performed through the adversarial loss and the contrastive loss, so that an image enhancement model that enhances images effectively and accurately is obtained quickly and accurately.
In some embodiments, the image enhancement model includes an encoder, an attention module, a residual module and a decoder;
and inputting the target image into the image enhancement model to obtain the enhanced image of the target image includes:
inputting the target image into the encoder for downsampling to obtain a first feature map of the target image;
inputting the first feature map into the attention module for detail feature extraction to obtain a second feature map of the target image;
inputting the second feature map into the residual module for residual operations to obtain a third feature map of the target image; and
inputting the third feature map into the decoder for upsampling to obtain the enhanced image of the target image.
Fig. 4 shows a schematic structural diagram of the generative adversarial network. As shown in fig. 4, the purpose of the image enhancement model is to convert a low-quality image into a high-quality image; it includes, but is not limited to, an encoder (also called the downsampling layers) for extracting input features, a deep mapping network, i.e., the attention module and the residual module, and a decoder (also called the upsampling layers) for decoding the deep features into the output.
The network structures of the encoder, the attention module, the residual module and the decoder may be built from various neural networks and set according to actual requirements. Illustratively, the encoder is built from convolutional and pooling layers, or from convolutional layers only; the attention module is built from pooling, convolutional and fully connected layers; the residual module comprises several stacked residual units, each built from at least one stack of convolutional and normalization layers, e.g., convolution + normalization + ReLU (Rectified Linear Unit) activation + convolution + normalization; and the decoder is built from multiple deconvolution layers, e.g., two. An image enhancement model constructed this way can enhance images efficiently and accurately.
Optionally, step 102 further includes:
reading the degraded target image according to the required image resolution. The size of the target image can be set adaptively to the application scenario; for example, a picture size of 3 × 256 × 256, where 3 is the number of color channels, R (red), G (green) and B (blue), and 256 × 256 is the width and height of the image.
The encoder performs preliminary feature extraction on the input target image and maps it to a high-dimensional space. Specifically, the encoder may increase the number of channels of the target image by convolution, e.g., doubling it, and reduce the spatial size by max pooling, e.g., halving it, so as to downsample the target image and output the first feature map, whose size may be 256 × 64 × 64; alternatively, the encoder may use convolution alone to expand the channels and shrink the spatial size simultaneously, downsampling the target image and outputting the first feature map of size 256 × 64 × 64.
To attend to the more important detail information in the image, enhance the flexibility of the model and improve the generalization of the method, the attention module then performs detail feature extraction on the first feature map, extracting the finer channel features, spatial texture features and the like of the target image, to obtain the second feature map of the target image.
The second feature map then passes through each residual block of the residual module in turn for multi-layer residual operations, completing the feature mapping and yielding the third feature map.
Next, the decoder restores the third feature map by multi-layer deconvolution to 256 × 256, the same size as the original input target image, with the number of channels reduced back to 3, thereby outputting the enhanced image of the target image.
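The following PyTorch sketch illustrates this encoder–attention–residual–decoder pipeline under the shapes just described (3 × 256 × 256 in, 256 × 64 × 64 internally); the exact layer counts, kernel sizes, normalization choice and the final Tanh are illustrative assumptions, and the attention module refers to the CSAM block detailed further below.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """conv + norm + ReLU + conv + norm, with a skip connection."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    def __init__(self, attention: nn.Module, n_res: int = 6):
        super().__init__()
        # Encoder: 3x256x256 -> 256x64x64 via stride-2 convolutions
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.attention = attention            # e.g., the CSAM module below
        self.residuals = nn.Sequential(*[ResidualUnit(256) for _ in range(n_res)])
        # Decoder: two deconvolution layers back to 3x256x256
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh())

    def forward(self, x):
        f1 = self.encoder(x)       # first feature map
        f2 = self.attention(f1)    # second feature map
        f3 = self.residuals(f2)    # third feature map
        return self.decoder(f3)    # enhanced image
```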
In this embodiment, through the encoder, the attention module, the residual module and the decoder, the image enhancement model can efficiently and accurately mine the finer detail features of the target image, making the image enhancement result more effective and accurate.
In some embodiments, the attention module includes a channel attention unit and a spatial attention unit;
the inputting the first feature map into the attention module for detail feature extraction to obtain a second feature map of the target image includes:
inputting the first feature map into the channel attention unit, calculating channel weights of the first feature map along the channel dimension, and adjusting the feature sub-map of each channel in the first feature map according to the channel weights; and
inputting the adjusted first feature map into the spatial attention unit, calculating spatial position weights of the adjusted first feature map along the spatial dimensions, and adjusting the feature sub-map at each spatial position in the adjusted first feature map according to the spatial position weights, to obtain the second feature map.
FIG. 5 is a schematic diagram of the attention module. As shown in fig. 5, the attention module is built as a CSAM (Channel and Spatial Attention Module), comprising a channel attention unit and a spatial attention unit.
The channel attention unit is used to extract channel features and includes, but is not limited to, global average pooling, multiple fully connected layers and a feature fusion layer.
The spatial attention unit is used to extract spatial detail features and includes, but is not limited to, global average pooling, convolutional layers and a feature fusion layer.
Optionally, after the first feature map is acquired, it may be passed through the channel attention unit to learn the more important channel information.
First, the first feature map passes through a global average pooling layer, outputting a 256 × 1 × 1 feature vector; this then passes through two fully connected layers in turn, the number of channels going from 256 to 16 and back to 256, after which a Sigmoid activation yields the channel weights. The channel weights are multiplied element-wise with the originally input first feature map, adjusting the feature sub-map of each channel and producing the first feature map with its channel weights redistributed.
The adjusted first feature map output by the channel attention unit is then passed through the spatial attention unit to attend to the more important object information and texture details in the spatial dimensions.
Optionally, the adjusted first feature map first undergoes global average pooling to obtain a 1 × 64 × 64 feature map, then a 1 × 1 convolution followed by a Sigmoid activation yields spatial position weights in [0, 1]; these are multiplied element-wise with the adjusted first feature map, adjusting the feature sub-map at each spatial position and producing the second feature map with differing spatial position weights.
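A minimal sketch of such a CSAM block, assuming the 256-channel, 64 × 64 feature maps described above, a 256 → 16 → 256 channel reduction, and channel-wise averaging as the spatial unit's global pooling; this illustrates the description rather than reproducing the patent's exact network.

```python
import torch
import torch.nn as nn

class CSAM(nn.Module):
    """Channel and Spatial Attention Module: channel attention followed
    by spatial attention, each producing weights in [0, 1] that rescale
    the input feature map element-wise."""
    def __init__(self, channels: int = 256, reduced: int = 16):
        super().__init__()
        # Channel attention: GAP -> FC(256->16) -> FC(16->256) -> Sigmoid
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, reduced), nn.ReLU(inplace=True),
            nn.Linear(reduced, channels), nn.Sigmoid())
        # Spatial attention: channel-wise average -> 1x1 conv -> Sigmoid
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        n, c, h, w = x.shape
        ch_w = self.channel_fc(x).view(n, c, 1, 1)  # channel weights
        x = x * ch_w                                # adjusted first feature map
        sp_in = x.mean(dim=1, keepdim=True)         # global average over channels
        sp_w = self.spatial_conv(sp_in)             # spatial position weights
        return x * sp_w                             # second feature map
```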
Existing unsupervised image enhancement methods tend not to preserve the detail textures of the image well and easily lose important information; in contrast, the attention mechanism of this embodiment retains the image's detail features while enhancing the image.
The following describes the image enhancement device provided by the present invention, and the image enhancement device described below and the image enhancement method described above can be referred to correspondingly.
As shown in fig. 6, the present embodiment provides an image enhancement apparatus including an acquisition unit 601 and an image enhancement unit 602, wherein:
the acquisition unit 601 is used for acquiring a target image;
the image enhancement unit 602 is configured to input the target image into an image enhancement model, so as to obtain an enhanced image of the target image;
wherein the image enhancement model is obtained by unsupervised training of a generative adversarial network based on an adversarial loss and a contrastive loss; the contrastive loss is obtained by contrastive learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by the generator in the adversarial network; the adversarial loss is obtained by adversarial learning based on the sample enhanced image and a third sample image in a second sample image set; and the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
In some embodiments, the apparatus further comprises a training unit for:
inputting the first sample image, the second sample image and the sample enhanced image into a feature extraction model to obtain a feature map of the first sample image, a feature map of the second sample image and a feature map of the sample enhanced image;
performing contrastive learning according to the feature map of the first sample image, the feature map of the second sample image and the feature map of the sample enhanced image to obtain the contrastive loss;
inputting the sample enhanced image and the third sample image into the discriminator of the generative adversarial network and performing adversarial learning to obtain the adversarial loss;
iteratively performing adversarial training on the parameters of the generative adversarial network according to the adversarial loss and the contrastive loss; and
constructing the image enhancement model from the generator of the trained generative adversarial network.
In some embodiments, the training unit is further configured to:
acquiring a first similarity distance between the feature map of the first sample image and the feature map of the sample enhanced image, and a second similarity distance between the feature map of the second sample image and the feature map of the sample enhanced image;
determining the contrastive loss according to the first similarity distance and the second similarity distance, the contrastive loss taking minimization of the first similarity distance and maximization of the second similarity distance as its objective.
In some embodiments, the training unit is further configured to:
inputting the sample enhanced image and the third sample image into the discriminator to obtain a discrimination result of the sample enhanced image and a discrimination result of the third sample image;
and determining the adversarial loss according to the discrimination result for the sample enhanced image and the discrimination result for the third sample image.
In some embodiments, the training unit is further configured to:
for the current round of adversarial training, fixing the parameters of the discriminator obtained in the previous round, and training the parameters of the generator obtained in the previous round with the objective of minimizing the fusion of the adversarial loss and the contrastive loss, to obtain the generator corresponding to the current round;
fixing the generator corresponding to the current round, and training the parameters of the discriminator obtained in the previous round with maximization of the adversarial loss as the optimization objective, to obtain the discriminator corresponding to the current round; and
iteratively performing the next round of adversarial training based on the discriminator and generator corresponding to the current round, until the generative adversarial network satisfies a preset termination condition.
In some embodiments, the image enhancement model includes an encoder, an attention module, a residual module and a decoder;
and the image enhancement unit 602 is specifically configured to:
input the target image into the encoder for downsampling to obtain a first feature map of the target image;
input the first feature map into the attention module for detail feature extraction to obtain a second feature map of the target image;
input the second feature map into the residual module for residual operations to obtain a third feature map of the target image; and
input the third feature map into the decoder for upsampling to obtain the enhanced image of the target image.
In some embodiments, the attention module includes a channel attention unit and a spatial attention unit;
an image enhancement unit 602, further configured to:
input the first feature map into the channel attention unit, calculate channel weights of the first feature map along the channel dimension, and adjust the feature sub-map of each channel in the first feature map according to the channel weights; and
input the adjusted first feature map into the spatial attention unit, calculate spatial position weights of the adjusted first feature map along the spatial dimensions, and adjust the feature sub-map at each spatial position in the adjusted first feature map according to the spatial position weights, to obtain the second feature map.
Fig. 7 illustrates the physical structure of an electronic device, which, as shown in fig. 7, may include: a processor 701, a communications interface 702, a memory 703 and a communication bus 704, the processor 701, communications interface 702 and memory 703 communicating with one another over the communication bus 704. The processor 701 may invoke logic instructions in the memory 703 to perform an image enhancement method comprising: acquiring a target image; and inputting the target image into an image enhancement model to obtain an enhanced image of the target image; wherein the image enhancement model is obtained by unsupervised training of a generative adversarial network based on an adversarial loss and a contrastive loss; the contrastive loss is obtained by contrastive learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by the generator in the adversarial network; the adversarial loss is obtained by adversarial learning based on the sample enhanced image and a third sample image in a second sample image set; and the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
In addition, the logic instructions in the memory 703 may be implemented as software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. On this understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program storable on a non-transitory computer-readable storage medium which, when executed by a processor, performs the image enhancement method provided by the above methods, the method comprising: acquiring a target image; and inputting the target image into an image enhancement model to obtain an enhanced image of the target image; wherein the image enhancement model is obtained by unsupervised training of a generative adversarial network based on an adversarial loss and a contrastive loss; the contrastive loss is obtained by contrastive learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by the generator in the adversarial network; the adversarial loss is obtained by adversarial learning based on the sample enhanced image and a third sample image in a second sample image set; and the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the image enhancement method provided by the above methods, the method comprising: acquiring a target image; and inputting the target image into an image enhancement model to obtain an enhanced image of the target image; wherein the image enhancement model is obtained by unsupervised training of a generative adversarial network based on an adversarial loss and a contrastive loss; the contrastive loss is obtained by contrastive learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by the generator in the adversarial network; the adversarial loss is obtained by adversarial learning based on the sample enhanced image and a third sample image in a second sample image set; and the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
The above-described apparatus embodiments are merely illustrative. Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement this without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware. Based on this understanding, the above technical solutions may be embodied in the form of a software product stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An image enhancement method, comprising:
acquiring a target image;
inputting the target image into an image enhancement model to obtain an enhanced image of the target image;
wherein the image enhancement model is obtained by performing unsupervised training on a generative adversarial network based on an adversarial loss and a contrast loss; the contrast loss is obtained by performing contrast learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by a generator in the adversarial network; the adversarial loss is obtained by performing adversarial learning based on the sample enhanced image and a third sample image in a second sample image set; and the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
2. The image enhancement method of claim 1, wherein the image enhancement model is trained based on the following steps:
inputting the first sample image, the second sample image and the sample enhanced image into a feature extraction model to obtain a feature map of the first sample image, a feature map of the second sample image and a feature map of the sample enhanced image;
according to the feature map of the first sample image, the feature map of the second sample image and the feature map of the sample enhanced image, performing contrast learning to obtain the contrast loss;
inputting the sample enhanced image and the third sample image into the discriminator of the generative adversarial network, and performing adversarial learning to obtain the adversarial loss;
iteratively performing adversarial training on the parameters of the generative adversarial network according to the adversarial loss and the contrast loss; and
constructing the image enhancement model from the generator in the trained generative adversarial network.
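Read together, the steps of claim 2 amount to one training iteration that computes both losses and then drives the adversarial update. The PyTorch sketch below is one plausible arrangement under that reading; the module names, the loss-function signatures, and the fusion weight lam are assumptions, since the claim does not fix them.

    # Hypothetical assembly of one generator-side training step (claim 2).
    import torch

    def generator_step_loss(generator, discriminator, feature_extractor,
                            first_sample, second_sample, third_sample,
                            contrast_loss_fn, adversarial_loss_fn, lam=1.0):
        # The generator outputs the sample enhanced image of the first sample image.
        sample_enhanced = generator(first_sample)
        # Feature maps of the three images from the feature extraction model.
        f_first = feature_extractor(first_sample)
        f_second = feature_extractor(second_sample)
        f_enhanced = feature_extractor(sample_enhanced)
        # Contrast loss from contrast learning; adversarial loss from the discriminator.
        l_contrast = contrast_loss_fn(f_first, f_second, f_enhanced)
        l_adv = adversarial_loss_fn(discriminator(sample_enhanced),
                                    discriminator(third_sample))
        # Fused objective used to update the generator; lam is an assumed weight.
        return l_adv + lam * l_contrast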
3. The image enhancement method according to claim 2, wherein the obtaining of the contrast loss by performing contrast learning based on the feature map of the first sample image, the feature map of the second sample image, and the feature map of the sample enhanced image comprises:
acquiring a first similarity distance between the feature map of the first sample image and the feature map of the sample enhanced image, and a second similarity distance between the feature map of the second sample image and the feature map of the sample enhanced image;
determining the contrast loss according to the first similarity distance and the second similarity distance, wherein the contrast loss takes minimization of the first similarity distance and maximization of the second similarity distance as its optimization objectives.
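The claim fixes only the two distances and their opposite optimization directions, not a concrete formula. One minimal sketch, assuming L1 distances between feature maps and a ratio-style combination (the loss falls as the first distance shrinks and as the second grows), is:

    # Hypothetical contrast loss; the L1 distance and ratio form are assumptions.
    import torch
    import torch.nn.functional as F

    def contrast_loss(f_first, f_second, f_enhanced, eps=1e-8):
        # First similarity distance: enhanced image vs. its own low-quality input (to be minimized).
        d_first = F.l1_loss(f_enhanced, f_first)
        # Second similarity distance: enhanced image vs. a different low-quality sample (to be maximized).
        d_second = F.l1_loss(f_enhanced, f_second)
        return d_first / (d_second + eps)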
4. The image enhancement method according to claim 2, wherein inputting the sample enhanced image and the third sample image into the discriminator of the generative adversarial network to perform adversarial learning to obtain the adversarial loss comprises:
inputting the sample enhanced image and the third sample image into the discriminator to obtain a discrimination result of the sample enhanced image and a discrimination result of the third sample image; and
determining the adversarial loss according to the discrimination result of the sample enhanced image and the discrimination result of the third sample image.
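The exact adversarial objective is likewise left open by the claim; a standard binary cross-entropy GAN formulation over the two discrimination results is one sketch, assuming the discriminator outputs raw logits:

    # Hypothetical adversarial losses; the BCE-with-logits form is an assumption.
    import torch
    import torch.nn.functional as F

    def discriminator_adv_loss(d_enhanced, d_third):
        # The discriminator should judge the third (high-quality) sample as real
        # and the sample enhanced image as fake.
        real = F.binary_cross_entropy_with_logits(d_third, torch.ones_like(d_third))
        fake = F.binary_cross_entropy_with_logits(d_enhanced, torch.zeros_like(d_enhanced))
        return real + fake

    def generator_adv_loss(d_enhanced):
        # The generator is rewarded when its output is judged real.
        return F.binary_cross_entropy_with_logits(d_enhanced, torch.ones_like(d_enhanced))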
5. The image enhancement method of claim 2, wherein iteratively performing adversarial training on the parameters of the generative adversarial network according to the adversarial loss and the contrast loss comprises:
for the current round of adversarial training, fixing the parameters of the discriminator obtained in the previous round, and training the parameters of the generator obtained in the previous round with the fusion of the adversarial loss and the contrast loss as the optimization target, to obtain the generator corresponding to the current round;
fixing the generator corresponding to the current round, and training the parameters of the discriminator obtained in the previous round with maximization of the adversarial loss as the optimization target, to obtain the discriminator corresponding to the current round; and
iteratively performing the next round of adversarial training based on the generator and the discriminator corresponding to the current round, until the generative adversarial network meets a preset termination condition.
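One way to realize this alternating fix-one-train-the-other scheme in PyTorch is sketched below; the optimizer choice, learning rate, epoch-budget termination condition, and loss-function signatures are assumptions layered on top of the claimed structure.

    # Hypothetical alternating adversarial training loop (claim 5).
    import torch

    def adversarial_training(generator, discriminator, generator_loss_fn,
                             discriminator_loss_fn, loader, epochs=100, lr=2e-4):
        opt_g = torch.optim.Adam(generator.parameters(), lr=lr)
        opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
        for _ in range(epochs):  # epoch budget stands in for the preset termination condition
            for first, second, third in loader:  # assumed (first, second, third) sample triples
                # Part 1: fix the discriminator, update the generator on the fused loss.
                for p in discriminator.parameters():
                    p.requires_grad_(False)
                opt_g.zero_grad()
                generator_loss_fn(generator, discriminator, first, second, third).backward()
                opt_g.step()
                # Part 2: fix the generator, update the discriminator; maximizing the
                # adversarial loss is implemented as minimizing the BCE discriminator loss.
                for p in discriminator.parameters():
                    p.requires_grad_(True)
                with torch.no_grad():
                    fake = generator(first)
                opt_d.zero_grad()
                discriminator_loss_fn(discriminator(fake), discriminator(third)).backward()
                opt_d.step()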
6. The image enhancement method of any one of claims 1 to 5, wherein the image enhancement model comprises an encoder, an attention module, a residual module and a decoder;
the inputting the target image into an image enhancement model to obtain an enhanced image of the target image includes:
inputting the target image into the encoder to perform downsampling to obtain a first feature map of the target image;
inputting the first feature map into the attention module for detail feature extraction to obtain a second feature map of the target image;
inputting the second feature map into the residual module to perform a residual operation to obtain a third feature map of the target image; and
inputting the third feature map into the decoder to perform upsampling to obtain the enhanced image of the target image.
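As a structural illustration of this four-stage pipeline, the following PyTorch skeleton wires an encoder (downsampling), an attention module, a residual module, and a decoder (upsampling) in the claimed order. The channel width, layer counts, and activations are assumptions, and the attention module defaults to an identity so the sketch runs standalone; a matching attention sketch follows claim 7.

    # Hypothetical generator skeleton for claim 6; widths and layers are assumptions.
    import torch
    import torch.nn as nn

    class EnhancementGenerator(nn.Module):
        def __init__(self, ch=64, attention=None):
            super().__init__()
            self.encoder = nn.Sequential(  # downsampling
                nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            self.attention = attention if attention is not None else nn.Identity()
            self.residual = nn.Sequential(  # residual operation branch
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1))
            self.decoder = nn.Sequential(  # upsampling back to image space
                nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, x):
            f1 = self.encoder(x)          # first feature map
            f2 = self.attention(f1)       # second feature map (detail features)
            f3 = f2 + self.residual(f2)   # third feature map, with skip connection
            return self.decoder(f3)       # enhanced image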
7. The image enhancement method of claim 6, wherein the attention module comprises a channel attention unit and a spatial attention unit;
the inputting the first feature map into the attention module for detail feature extraction to obtain a second feature map of the target image includes:
inputting the first feature map into the channel attention unit, calculating channel weights of the first feature map in the channel dimension, and adjusting the feature sub-map of each channel in the first feature map according to the channel weights; and
inputting the adjusted first feature map into the spatial attention unit, calculating spatial position weights of the adjusted first feature map in the spatial dimension, and adjusting the feature sub-map at each spatial position in the adjusted first feature map according to the spatial position weights to obtain the second feature map.
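This channel-then-spatial ordering matches the widely used CBAM pattern, which the sketch below follows; the pooling scheme, reduction ratio, and 7x7 spatial convolution are conventional assumptions rather than details recited in the claim.

    # Hypothetical channel + spatial attention unit for claim 7 (CBAM-style assumptions).
    import torch
    import torch.nn as nn

    class AttentionModule(nn.Module):
        def __init__(self, ch, reduction=16):
            super().__init__()
            self.channel_mlp = nn.Sequential(  # channel attention unit
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
                nn.Conv2d(ch // reduction, ch, 1))
            self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # spatial attention unit

        def forward(self, x):
            # Channel weights reweight each channel's feature sub-map.
            x = x * torch.sigmoid(self.channel_mlp(x))
            # Spatial position weights reweight each spatial location.
            pooled = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial_conv(pooled))

Used together, the two sketches compose as EnhancementGenerator(ch=64, attention=AttentionModule(64)).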
8. An image enhancement apparatus, comprising:
an acquisition unit configured to acquire a target image;
an image enhancement unit configured to input the target image into an image enhancement model to obtain an enhanced image of the target image;
wherein the image enhancement model is obtained by performing unsupervised training on a generative adversarial network based on an adversarial loss and a contrast loss; the contrast loss is obtained by performing contrast learning based on a first sample image and a second sample image in a first sample image set and a sample enhanced image of the first sample image output by a generator in the adversarial network; the adversarial loss is obtained by performing adversarial learning based on the sample enhanced image and a third sample image in a second sample image set; and the quality of the images in the first sample image set is lower than the quality of the images in the second sample image set.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the image enhancement method according to any one of claims 1 to 7 when executing the program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the image enhancement method according to any one of claims 1 to 7.
CN202211740347.0A, filed 2022-12-30 (priority 2022-12-30): Image enhancement method and device, electronic equipment and storage medium. Published as CN115953317A (status: Pending).

Priority Applications (1)

Application Number: CN202211740347.0A · Priority Date: 2022-12-30 · Filing Date: 2022-12-30 · Title: Image enhancement method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number: CN202211740347.0A · Priority Date: 2022-12-30 · Filing Date: 2022-12-30 · Title: Image enhancement method and device, electronic equipment and storage medium

Publications (1)

Publication Number: CN115953317A · Publication Date: 2023-04-11

Family

Family ID: 87285786

Family Applications (1)

Application Number: CN202211740347.0A (Pending) · Publication: CN115953317A · Priority Date: 2022-12-30 · Filing Date: 2022-12-30 · Title: Image enhancement method and device, electronic equipment and storage medium

Country Status (1)

Country: CN · Publication: CN115953317A

Cited By (1)

* Cited by examiner, † Cited by third party

Publication Number: CN117522754A * · Priority Date: 2023-10-25 · Publication Date: 2024-02-06 · Assignee: 广州极点三维信息科技有限公司 · Title: Image enhancement method, device, electronic equipment and storage medium


Similar Documents

Publication Number · Title
CN110827216B (en) Multi-generator generation countermeasure network learning method for image denoising
CN106683048B (en) Image super-resolution method and device
CN112819910B (en) Hyperspectral image reconstruction method based on double-ghost attention machine mechanism network
CN110059728B (en) RGB-D image visual saliency detection method based on attention model
EP4163832A1 (en) Neural network training method and apparatus, and image processing method and apparatus
CN112541877B (en) Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
CN110517352B (en) Three-dimensional reconstruction method, storage medium, terminal and system of object
CN111861894A (en) Image motion blur removing method based on generating type countermeasure network
Lyu et al. DeGAN: Mixed noise removal via generative adversarial networks
CN114936979B (en) Model training method, image denoising method, device, equipment and storage medium
Rivadeneira et al. Thermal image super-resolution challenge-pbvs 2021
CN112927137A (en) Method, device and storage medium for acquiring blind super-resolution image
CN115953317A (en) Image enhancement method and device, electronic equipment and storage medium
CN115861094A (en) Lightweight GAN underwater image enhancement model fused with attention mechanism
Ahmed et al. PIQI: perceptual image quality index based on ensemble of Gaussian process regression
CN116309178A (en) Visible light image denoising method based on self-adaptive attention mechanism network
CN113538616B (en) Magnetic resonance image reconstruction method combining PUGAN with improved U-net
CN114283058A (en) Image super-resolution reconstruction method based on countermeasure network and maximum mutual information optimization
US20240054605A1 (en) Methods and systems for wavelet domain-based normalizing flow super-resolution image reconstruction
CN115439849B (en) Instrument digital identification method and system based on dynamic multi-strategy GAN network
CN112446835A (en) Image recovery method, image recovery network training method, device and storage medium
CN113705358B (en) Multi-angle side face normalization method based on feature mapping
CN116385281A (en) Remote sensing image denoising method based on real noise model and generated countermeasure network
CN115688234A (en) Building layout generation method, device and medium based on conditional convolution
CN116137043A (en) Infrared image colorization method based on convolution and transfomer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination