CN111899185A - Training method and device of image noise reduction model, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111899185A
CN111899185A
Authority
CN
China
Prior art keywords
noise
image
free
model
generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010557928.5A
Other languages
Chinese (zh)
Inventor
郑海荣
刘新
张娜
胡战利
薛恒志
梁栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010557928.5A priority Critical patent/CN111899185A/en
Publication of CN111899185A publication Critical patent/CN111899185A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/003 - Reconstruction from projections, e.g. tomography
    • G06T 11/008 - Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the specification discloses a training method and device of an image noise reduction model, electronic equipment and a computer-readable storage medium. The method comprises the following steps: inputting a noisy original image into a first generation model of a cycle-consistency generative adversarial network to generate a first noise-free generated image; inputting the first noise-free generated image into a second generation model of the cycle-consistency generative adversarial network to generate a second noise-carrying generated image; inputting a noise-free original image into the second generation model of the cycle-consistency generative adversarial network to generate a first noise-carrying generated image; inputting the first noise-carrying generated image into the first generation model of the cycle-consistency generative adversarial network to generate a second noise-free generated image; and optimizing the cycle-consistency generative adversarial network to obtain a trained image noise reduction model.

Description

Training method and device of image noise reduction model, electronic equipment and storage medium
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a training method and device for an image noise reduction model, electronic equipment and a computer-readable storage medium.
Background
Magnetic Resonance Imaging (MRI) is a technology for reconstructing human tissue images by using the nuclear magnetic resonance phenomenon. It can provide rich human tissue image information without causing ionizing damage to the human body, and is a clinical medical examination means widely applied in China.
Due to the limitations of various factors such as the magnetic resonance imaging mechanism, the scanning speed and target motion, an image acquired by a magnetic resonance imaging scanner may contain obvious noise, and the obtained image may not meet the standard of clinical diagnosis, thereby reducing the accuracy of clinical diagnosis. Therefore, in order to ensure image quality while performing fast imaging, it is necessary to reduce the noise of the image.
At present, deep learning methods mainly learn the mapping relation from noisy images to noise-free images in the noise reduction direction; however, the learning efficiency of this approach is not high, and the quality of the generated noise-reduced images is relatively low.
Disclosure of Invention
Embodiments of the present specification provide a training method and apparatus for an image noise reduction model, an electronic device, and a computer-readable storage medium, so as to solve the problems that the quality of denoised images is relatively low and the learning efficiency of existing deep learning approaches is not high.
The embodiment of the specification adopts the following technical scheme:
a training method of an image noise reduction model comprises the following steps: acquiring an image sample set; wherein the image sample set comprises: a noisy original image and a noise-free original image corresponding to the noisy original image; inputting the noisy original image into a cycle consistency to generate a first generation model of a countermeasure network, and generating a first noise-free generation image; inputting the first noise-free generated image into a second generation model of the cyclic consistency generation countermeasure network to generate a second noise-carrying generated image; inputting the noise-free original image into a second generation model of the cyclic consistency generation countermeasure network to generate a first noise-carrying generated image; inputting the first noise-carrying generated image into a first generation model of the cyclic consistency generation countermeasure network to generate a second noise-free generated image; inputting the first noise-free generated image and the noise-free original image into a first discrimination model of the cyclic consistency generation countermeasure network to obtain a first discrimination result of whether the first noise-free generated image generates an image category; inputting the first noise-carrying generated image and the noise-carrying original image into a second judgment model of the cyclic consistency generation countermeasure network to obtain a second judgment result of whether the first noise-carrying generated image generates an image category or not; and optimizing the loop consistency generation countermeasure network according to the first judgment result, the second judgment result, the first noise-free generated image, the noise-free original image, the second noise-carrying generated image, the second noise-free generated image and the noise-carrying original image so as to obtain a trained image noise reduction model.
An image noise reduction method based on the above training method of an image noise reduction model, the image noise reduction method comprising: acquiring a noisy original image; and inputting the noisy original image into a trained image noise reduction model to generate a noise-reduced image.
An apparatus for training an image noise reduction model, comprising: an acquisition module, used for acquiring an image sample set, wherein the image sample set comprises a noisy original image and a noise-free original image corresponding to the noisy original image; a first generation module, used for inputting the noisy original image acquired by the acquisition module into a first generation model of a cycle-consistency generative adversarial network to generate a first noise-free generated image; a second generation module, used for inputting the first noise-free generated image generated by the first generation module into a second generation model of the cycle-consistency generative adversarial network to generate a second noise-carrying generated image; a third generation module, used for inputting the noise-free original image acquired by the acquisition module into the second generation model of the cycle-consistency generative adversarial network to generate a first noise-carrying generated image; a fourth generation module, used for inputting the first noise-carrying generated image generated by the third generation module into the first generation model of the cycle-consistency generative adversarial network to generate a second noise-free generated image; a first discrimination module, used for inputting the first noise-free generated image generated by the first generation module and the noise-free original image acquired by the acquisition module into the first discrimination model of the cycle-consistency generative adversarial network to obtain a first discrimination result of whether the first noise-free generated image belongs to the generated-image category; a second discrimination module, used for inputting the first noise-carrying generated image generated by the third generation module and the noisy original image acquired by the acquisition module into the second discrimination model of the cycle-consistency generative adversarial network to obtain a second discrimination result of whether the first noise-carrying generated image belongs to the generated-image category; and an optimization module, used for optimizing the cycle-consistency generative adversarial network according to the first discrimination result obtained by the first discrimination module, the second discrimination result obtained by the second discrimination module, the first noise-free generated image generated by the first generation module, the noise-free original image acquired by the acquisition module, the second noise-carrying generated image generated by the second generation module, the second noise-free generated image generated by the fourth generation module and the noisy original image acquired by the acquisition module, so as to obtain a trained image noise reduction model.
An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of any of the above training methods of an image noise reduction model.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the above training methods of an image noise reduction model.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
the embodiment of the specification learns not only the mapping relation from the noisy image to the noise-free image in the noise-reduced-image generation direction, but also, because a second generation model in the opposite direction is added during training, the mapping relation from the noise-free image to the noisy image in that opposite direction. The learned mapping of the first generation model can therefore be corrected, so that the first generation model forms a correct mapping in the expected noise-reduced-image generation direction, the deviation between the generated noise-free image and the actual noise-free image is reduced, the quality of the noise-reduced image generated by the image noise reduction model is improved, and the learning capability and learning efficiency of the neural network are improved.
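As a minimal illustrative sketch (not part of the patent itself), the bidirectional constraint described above is commonly written as a cycle-consistency loss. The stub generators `g1` and `g2` below are hypothetical pixel-wise functions standing in for the two generation models, not real neural networks:

```python
def l1_loss(a, b):
    """Mean absolute error between two equal-length pixel sequences."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_loss(x_noisy, y_clean, g1, g2):
    # Forward cycle: x -> G1(x) -> G2(G1(x)) should reconstruct x.
    x_rec = [g2(g1(p)) for p in x_noisy]
    # Backward cycle: y -> G2(y) -> G1(G2(y)) should reconstruct y.
    y_rec = [g1(g2(p)) for p in y_clean]
    return l1_loss(x_rec, x_noisy) + l1_loss(y_rec, y_clean)

# Toy generators that are exact inverses, so the cycle loss is zero.
g1 = lambda p: p - 0.5   # "remove noise"
g2 = lambda p: p + 0.5   # "re-add noise"
print(cycle_loss([1.0, 2.0], [0.5, 1.5], g1, g2))  # prints 0.0
```

If the two generators are not inverses of each other, the loss becomes positive, which is exactly the deviation signal the training feeds back to correct the first generation model.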
On the other hand, the generated first noise-free generated image and first noise-carrying generated image are further discriminated by the first discrimination model and the second discrimination model, and the discrimination results are fed back to the optimization training of the cycle-consistency generative adversarial network. The aim is to train the generation models until the first discrimination model mistakenly discriminates a generated image as the non-generated-image category, so that the noise-free image generated by the trained image noise reduction model is as close as possible to the actual noise-free image, thereby further improving the quality of the noise-free image generated by the image noise reduction model.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description serve to explain the embodiments of the specification and are not intended to limit the embodiments of the specification unduly. In the drawings:
fig. 1 is a schematic flowchart of a training method of an image noise reduction model provided in an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a cycle consistency generation countermeasure network provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a network structure of a first generative model provided in an embodiment of the present disclosure;
fig. 4 is a schematic network structure diagram of a first discriminant model provided in an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of an image denoising method provided in an embodiment of the present specification;
fig. 6 is a schematic structural diagram of a training apparatus for an image noise reduction model provided in an embodiment of the present specification;
fig. 7 is a schematic structural diagram of an image noise reduction device provided in an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of this specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the embodiments of the present disclosure will be described in detail and completely below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the specification. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments in the present specification without any creative effort belong to the protection scope of the embodiments of the present specification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
The embodiment of the specification provides a training method of an image noise reduction model, which is used for improving the learning efficiency of a neural network and the generation quality of noise-reduced images. The execution subject of the method includes, but is not limited to, a server, a personal computer, a notebook computer, a tablet computer, a smart phone, and the like, which can execute predetermined processes such as numerical calculation and/or logical calculation by running predetermined programs and instructions. The server may be a single network server, a server group consisting of a plurality of network servers, or a Cloud-Computing-based cloud consisting of a large number of computers and network servers. The embodiments of the present specification do not limit the execution subject of the method. The flow diagram of the method is shown in fig. 1 and comprises the following steps:
step 11: a sample set of images is acquired.
The image sample set may include: a noise-free original image and a noisy original image corresponding to the noise-free original image.
In practical applications, the noise-free raw image may be from a historically stored noise-free image.
The noisy original image corresponding to the noise-free original image may be generated by adding simulated noise to the noise-free original image. The simulated noise may include Gaussian noise, salt-and-pepper noise, and the like; the embodiment of the present specification does not limit which kind of simulated noise is added.
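For illustration only, the following sketch adds the two kinds of simulated noise mentioned above to a toy list of pixel values in [0, 1]; the function names and parameters (`sigma`, `prob`, `seed`) are assumptions, not part of the patent:

```python
import random

def add_gaussian_noise(pixels, sigma=0.1, seed=0):
    """Add zero-mean Gaussian noise, clipping the result back to [0, 1]."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]

def add_salt_pepper_noise(pixels, prob=0.05, seed=0):
    """Flip roughly a fraction `prob` of pixels to pure black (0) or white (1)."""
    rng = random.Random(seed)
    out = []
    for p in pixels:
        r = rng.random()
        if r < prob / 2:
            out.append(0.0)        # pepper
        elif r < prob:
            out.append(1.0)        # salt
        else:
            out.append(p)
    return out

clean = [0.5] * 8
noisy = add_gaussian_noise(clean)
```

Pairing each noise-free image with its noised copy in this way yields the corresponding image pairs of the sample set.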
Step 12: the noisy original image is input into the first generation model of the cycle-consistency generative adversarial network to generate a first noise-free generated image.
Step 13: the first noise-free generated image is input into the second generation model of the cycle-consistency generative adversarial network to generate a second noise-carrying generated image.
Step 14: the noise-free original image is input into the second generation model of the cycle-consistency generative adversarial network to generate a first noise-carrying generated image.
Step 15: the first noise-carrying generated image is input into the first generation model of the cycle-consistency generative adversarial network to generate a second noise-free generated image.
In one or more embodiments of the present description, the cycle-consistency generative adversarial network may include: a first generation model, a second generation model, a first discrimination model and a second discrimination model.
For convenience of understanding the process of obtaining each generated image through the first generation model and the second generation model in steps 12 to 15, the two generation models of the cycle-consistency generative adversarial network shown in fig. 2 are explained here. For a noisy original image: the noisy original image is input into the first generation model through input port 1, and a first noise-free generated image is output; the first noise-free generated image is then input into the second generation model through input port 2 to generate a second noise-carrying generated image. For a noise-free original image: the noise-free original image is input into the second generation model through input port 2, and a first noise-carrying generated image is output; the first noise-carrying generated image is then input into the first generation model through input port 1 to generate a second noise-free generated image.
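The data flow of steps 12 to 15 can be traced with hypothetical stub generators; the real models are convolutional networks, and the toy pixel-wise functions below serve only to show which image feeds which port:

```python
# Stub generators standing in for the two generation models of fig. 2.
def generator_denoise(img):        # first generation model (input port 1)
    return [p - 0.2 for p in img]

def generator_add_noise(img):      # second generation model (input port 2)
    return [p + 0.2 for p in img]

x_noisy = [0.7, 0.9]               # noisy original image
y_clean = [0.3, 0.5]               # noise-free original image

first_noise_free  = generator_denoise(x_noisy)             # step 12
second_noisy      = generator_add_noise(first_noise_free)  # step 13
first_noisy       = generator_add_noise(y_clean)           # step 14
second_noise_free = generator_denoise(first_noisy)         # step 15
```

With these toy inverse generators, `second_noisy` reconstructs `x_noisy` and `second_noise_free` reconstructs `y_clean`, which is exactly the cycle the training optimizes toward.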
In one or more embodiments of the present description, the first generative model may comprise a first convolutional network layer and a first deconvolution network layer; the second generative model may include a second convolutional network layer and a second deconvolution network layer. The first convolutional network layer and the first deconvolution network layer are connected by skip connections, and the second convolutional network layer and the second deconvolution network layer are connected by skip connections. Using skip connections can improve the learning ability and stability of the neural network, avoid the degradation of the training effect of a deeper neural network as its depth increases, accelerate the network training process, and retain more image details.
In practical applications, the first convolutional network layer and the second convolutional network layer may respectively include at least one convolutional layer, and each convolutional layer in the at least one convolutional layer is connected in series, that is, an output graph of a previous convolutional layer may be used as an input graph of a next convolutional layer. The first deconvolution network layer and the second deconvolution network layer may respectively include at least one deconvolution layer, and each of the at least one deconvolution layers is connected in series, that is, an output graph of a previous deconvolution layer may be used as an input graph of a next deconvolution layer. Note that, the input map and the output map may both be referred to as feature maps (feature maps).
Fig. 3 is a schematic diagram of an exemplary network structure of the first generative model of the cycle-consistency generative adversarial network provided in the embodiment of the present specification.
In the first generative model, the first convolutional network layer and the first deconvolution network layer may include 4 convolutional layers and 4 deconvolution layers, respectively. In some embodiments, each convolutional layer and each deconvolution layer may be followed by a batch normalization (BatchNorm3D) layer and an activation function layer. The batch normalization layer performs batch normalization on the output map of the convolutional or deconvolution layer, alleviating the gradient dispersion problem in the model; the activation function layer applies a non-linear activation function to the output map, where the activation function may be, for example, a Rectified Linear Unit (ReLU), a Leaky Rectified Linear Unit (Leaky ReLU), and the like. Which activation function is applied is not limited in the embodiment of the present specification.
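The batch normalization and activation layers described above can be sketched as follows; this is a simplified list-based illustration, not the actual 3D implementation:

```python
def relu(xs):
    """Rectified Linear Unit applied element-wise."""
    return [max(0.0, v) for v in xs]

def leaky_relu(xs, alpha=0.01):
    """Leaky ReLU: a small negative slope instead of a hard zero."""
    return [v if v > 0 else alpha * v for v in xs]

def batch_norm(xs, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance."""
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return [(v - mean) / (var + eps) ** 0.5 for v in xs]
```

Normalizing activations before the non-linearity keeps their distribution stable from layer to layer, which is the gradient-dispersion mitigation the text refers to.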
Fig. 3 also shows the number of convolution kernels for each convolutional layer and deconvolution layer; for example, "Conv3D, 32" indicates that 32 three-dimensional convolution kernels are provided in that convolutional layer, and "DeConv3D, 64" indicates that 64 three-dimensional convolution kernels are provided in that deconvolution layer. In the embodiment of the present specification, each convolutional layer and each deconvolution layer may use convolution kernels with a size of 3 × 3; of course, convolution kernels of other sizes may also be used, which is not limited in the embodiment of the present specification.
The first convolutional network layer is used for performing convolution processing on the input image layer by layer so as to extract deep image features. The first convolutional layer may perform convolution processing on the pixel values of the input noisy original image to obtain the feature map output by the first convolutional layer; this output feature map serves as the input map of the second convolutional layer, the output map of the second convolutional layer serves as the input map of the third convolutional layer, and so on. In practical applications, the convolution stride of each convolutional layer may be 1. It should be noted that the input maps and output maps referred to in one or more embodiments of the present specification are feature maps.
In practical applications, the number of feature maps output by each convolutional layer may be the same as the number of convolution kernels of the convolutional layer, for example, the number of feature maps output by the first convolutional layer of the first convolutional layer shown in fig. 3 is 32, the number of feature maps output by the second convolutional layer is 64, the number of feature maps output by the third convolutional layer is 128, and the number of feature maps output by the fourth convolutional layer is 256.
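Assuming "same" padding (the padding value is an assumption; the patent does not state it), the relationship between the 3 × 3 kernels, stride 1 and spatial size mentioned above can be checked with the standard convolution output-size formula:

```python
def conv_out_size(in_size, kernel=3, stride=1, padding=1):
    """Spatial output size of a convolution layer (integer arithmetic)."""
    return (in_size + 2 * padding - kernel) // stride + 1

# The channel count follows the kernel counts of fig. 3, while the spatial
# size stays constant through the four convolutional layers.
channels = [32, 64, 128, 256]    # feature maps after conv layers 1 to 4
size = 64                        # hypothetical input width/height
for _ in channels:
    size = conv_out_size(size)
print(size)   # prints 64: unchanged with 3x3 kernels, stride 1, padding 1
```

Without padding the size would instead shrink by 2 pixels per 3 × 3 layer, so the choice of padding determines whether skip connections can concatenate maps of matching size.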
In practical application, each feature map output by the first convolution network layer may represent a local feature of the extracted noisy original image, and then the first deconvolution network layer may restore the image size layer by layer according to the extracted local feature, that is, each feature map output by the first convolution network layer, to generate the first noise-free generated image.
Each deconvolution layer of the first deconvolution network layer shown in fig. 3 may be configured to perform a transpose convolution process on the feature maps output from the previous layer, so as to merge the feature maps layer by layer, for example, merge 256 feature maps into 128 feature maps, merge 128 feature maps into 64 feature maps, merge 64 feature maps into 32 feature maps, and correspondingly enlarge the feature map size layer by layer, and then connect the 32 feature maps through the last layer, thereby outputting the generated first noise-free generated image.
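Correspondingly, the layer-by-layer enlargement of the feature map size by transposed convolution follows the standard output-size formula; the stride, padding and output-padding values below are assumptions chosen for illustration (a stride of 2 doubles the spatial size):

```python
def deconv_out_size(in_size, kernel=3, stride=2, padding=1, output_padding=1):
    """Spatial output size of a transposed (de-)convolution layer."""
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding
```

For example, an 8-pixel-wide map becomes 16 pixels wide, then 32, and so on, mirroring the size reduction on the convolutional side.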
The network structure of the second generative model of the cycle-consistency generative adversarial network may be the same as that of the first generative model; the schematic network structure shown in fig. 3 may therefore also serve as that of the second generative model. For the functions of the second convolutional network layer and the second deconvolution network layer of the second generative model, reference may be made to the above description of the first convolutional network layer and the first deconvolution network layer, and details are not repeated here.
It should be noted that the first generative model and the second generative model of the above exemplary cycle-consistency generative adversarial network are one implementation of the first and second generative models provided in the embodiments of the present specification and do not represent all implementations. The number of convolutional layers, the number of convolution kernels, the size of the convolution kernels and the like included in the first and second generative models can be set according to actual requirements, and the embodiment of the present specification is not limited in this respect.
Step 16: the first noise-free generated image and the noise-free original image are input into the first discrimination model of the cycle-consistency generative adversarial network to obtain a first discrimination result of whether the first noise-free generated image belongs to the generated-image category.
As described above, in one or more embodiments of the present specification, the cycle-consistency generative adversarial network includes a first generative model, a second generative model, a first discrimination model, and a second discrimination model.
In practical applications, during training there may be a deviation between a first noise-free generated image generated from a noisy original image and the noise-free original image corresponding to that noisy original image. If this deviation can be fed back to the training of the first generation model of the cycle-consistency generative adversarial network, the quality of the noise-free generated images generated by the first generation model can be further improved.
In this embodiment of the present specification, whether the generated first noise-free generated image belongs to the generated-image category may be determined by the first discrimination model, so as to obtain a first discrimination result. The first discrimination result is then fed back, through a preset loss function, to the training of the first generation model of the cycle-consistency generative adversarial network; that is, the deviation between the first noise-free generated image and the noise-free original image is fed back to the training of the first generation model.
In one or more embodiments of the present disclosure, the first discrimination model may be constructed based on a Convolutional Neural Network (CNN). Fig. 4 shows an exemplary network structure of the first discrimination model. As shown in fig. 4, the first discrimination model includes three convolutional layers and one fully connected layer; a batch normalization layer and an activation function layer are disposed behind each convolutional layer. The number of convolution kernels in each convolutional layer is shown in fig. 4 and is not repeated here; each convolutional layer adopts convolution kernels with a size of 3 × 3. Of course, other types of neural network structures may also be used, and the embodiments of the present disclosure are not limited thereto.
In practical applications, in order to enable the first discrimination model to accurately output the first discrimination result during the training of the cycle-consistency generative adversarial network, that is, to discriminate the first noise-free generated image as the generated-image category as accurately as possible, in one or more embodiments of the present specification, optimizing the first discrimination model of the cycle-consistency generative adversarial network by using a fourth loss function may include: adjusting the model parameters of the first discrimination model through the fourth loss function to optimize the first discrimination model, so that the first discrimination model can accurately output the first discrimination result.
In one or more embodiments of the present description, the fourth loss function is

L_GAN(G1, D1) = E_y[log D1(y)] + E_x[log(1 - D1(G1(x)))]

wherein x denotes the noisy original image, y denotes the noise-free original image, G1(x) denotes the first noise-free generated image, D1 denotes the first discrimination model, D1(G1(x)) denotes the first discrimination result, E_x denotes the mathematical expectation of the function when the input is a noisy original image, D1(y) denotes the discrimination result of the first discrimination model on whether the noise-free original image belongs to the original-image category, and E_y denotes the mathematical expectation of the function when the input is a noise-free original image.
In practical applications, adjusting the model parameters of the first discrimination model through the fourth loss function may include: calculating a loss value of the discrimination result of the first discrimination model through the fourth loss function, and feeding the loss value back to the first discrimination model by means of back propagation to adjust the model parameters.
In practical applications, since the input data of the first discriminant model includes the first noise-free generated image, and the first noise-free generated image is generated by the first generation model, the first discriminant model and the first generation model may be trained simultaneously. Their training objectives are opposed: training the first generation model aims to have the first noise-free generated image discriminated as the non-generated image category, while training the first discriminant model aims to improve the discrimination accuracy of the model so that the first noise-free generated image is discriminated as the generated image category as much as possible. Because the first discrimination result output by the first discriminant model is fed back to the training of the first generation model through the preset loss function, improving the discrimination accuracy of the first discriminant model, that is, the accuracy of the output first discrimination result, amounts to improving the quality of the noise-free generated images generated by the first generation model.
In this embodiment of the present specification, whether the first noise-free generated image belongs to the generated image category may be judged by the first discriminant model to obtain the first discrimination result, and the first discrimination result is then fed back to the training of the cyclic consistency generation countermeasure network through a preset loss function, thereby improving the quality of the noise-free generated images generated by the first generation model of the cyclic consistency generation countermeasure network.
Step 17: input the first noise-carrying generated image and the noise-carrying original image into the second judgment model of the cyclic consistency generation countermeasure network to obtain a second judgment result of whether the first noise-carrying generated image belongs to the generated image category.
In practical applications, the second generation model of the cyclic consistency generation countermeasure network also learns the mapping relationship from the noise-free original image to the noise-carrying original image. To improve the learning effect of the neural network, the quality of the noise-carrying generated images generated by the second generation model can be improved based on the same consideration as in step 16, that is, the deviation between the first noise-carrying generated image and the noise-carrying original image corresponding to the noise-free original image is fed back to the training of the second generation model.
In this embodiment of the present specification, whether the first noise-carrying generated image belongs to the generated image category may be judged by the second judgment model to obtain the second judgment result, so that the second judgment result is fed back to the training of the second generation model of the cyclic consistency generation countermeasure network through a preset loss function, that is, the deviation between the first noise-carrying generated image and the noise-carrying original image is fed back to the training of the second generation model.
In one or more embodiments of the present disclosure, the pre-constructed second determination model may also be constructed based on a convolutional neural network, for example, the second determination model may also adopt a network structure as shown in fig. 4. Of course, other types of neural network structures may be used, and the embodiments of the present disclosure are not limited thereto.
In practical applications, in order to enable the second judgment model to accurately output the second judgment result during the training of the cyclic consistency generation countermeasure network, the second judgment model should judge the first noise-carrying generated image as the generated image category as much as possible. In one or more embodiments of the present specification, optimizing the second judgment model of the cyclic consistency generation countermeasure network through the fifth loss function specifically includes: adjusting the model parameters of the second judgment model through the fifth loss function so as to optimize the second judgment model, so that the second judgment model can accurately output the second judgment result.
In one or more embodiments of the present disclosure, the fifth loss function is

L_5 = E_x[(D_2(x) - 1)^2] + E_y[(D_2(G_2(y)))^2]

wherein G_2(y) denotes the first noise-carrying generated image, D_2 denotes the second judgment model, D_2(G_2(y)) denotes the second judgment result, and D_2(x) denotes the judgment result of the second judgment model on whether the noise-carrying original image is of the original image category.
In practical applications, adjusting the model parameters of the second judgment model through the fifth loss function may include: calculating a loss value for the judgment result of the second judgment model through the fifth loss function, and feeding the loss value back to the second judgment model by back propagation to adjust the model parameters.
In practical applications, since the input data of the second judgment model includes the first noise-carrying generated image, and the first noise-carrying generated image is generated by the second generation model of the cyclic consistency generation countermeasure network, the second judgment model and the second generation model may be trained simultaneously. Their training objectives are opposed: training the second generation model aims to have the first noise-carrying generated image judged as the non-generated image category by the second judgment model, while training the second judgment model aims to improve the judgment accuracy of the model so that the first noise-carrying generated image is judged as the generated image category as much as possible. Because the second judgment result output by the second judgment model is fed back to the training of the second generation model through the preset loss function, improving the judgment accuracy of the second judgment model, that is, the accuracy of the output second judgment result, amounts to improving the quality of the noise-carrying images generated by the second generation model.
In this embodiment of the present specification, whether the first noise-carrying generated image belongs to the generated image category may be judged by the second judgment model to obtain the second judgment result, and the second judgment result is then fed back to the training of the cyclic consistency generation countermeasure network through a preset loss function, thereby improving the quality of the noise-carrying generated images generated by the second generation model of the cyclic consistency generation countermeasure network.
Step 18: optimize the cyclic consistency generation countermeasure network according to the first judgment result, the second judgment result, the first noise-free generated image, the noise-free original image, the second noise-carrying generated image, the second noise-free generated image and the noise-carrying original image, so as to obtain a trained image noise reduction model.
In practical applications, optimizing the cyclic consistency generation countermeasure network may be implemented through a loss function. In one or more embodiments of the present specification, optimizing the cyclic consistency generation countermeasure network according to the first judgment result, the second judgment result, the first noise-free generated image, the noise-free original image, the second noise-carrying generated image, the second noise-free generated image and the noise-carrying original image to obtain a trained image noise reduction model may include: substituting the first judgment result, the second judgment result, the first noise-free generated image, the noise-free original image, the second noise-carrying generated image, the second noise-free generated image and the noise-carrying original image into a preset loss function to obtain a loss value; and optimizing the first generation model and the second generation model of the cyclic consistency generation countermeasure network according to the loss value, so as to obtain a trained image noise reduction model.
In one or more embodiments of the present description, the preset loss function may include a first loss function, a second loss function, a third loss function and a perceptual loss function. Substituting the first judgment result, the second judgment result, the first noise-free generated image, the noise-free original image, the second noise-carrying generated image, the second noise-free generated image and the noise-carrying original image into the preset loss function to obtain a loss value may then include: substituting the first judgment result into the first loss function to obtain a first loss value; substituting the second judgment result into the second loss function to obtain a second loss value; substituting the second noise-free generated image, the noise-free original image, the second noise-carrying generated image and the noise-carrying original image into the third loss function to obtain a third loss value; and substituting the first noise-free generated image and the noise-free original image into the perceptual loss function to obtain a fourth loss value.
In the embodiment of the present specification, the first loss function determines a loss value from the first discrimination result of the first discriminant model, and the second loss function determines a loss value from the second judgment result of the second judgment model. If the first generation model and the second generation model were optimized using only the first loss function and the second loss function respectively, the neural networks of the two generation models might degrade during training. By additionally optimizing the generation models on both sides together through the third loss function, problems such as gradient vanishing or gradient explosion, which may be caused by the adversarial learning between the generation models and the discrimination models during model training, can be prevented.
In practical applications, the first loss function, the second loss function and the third loss function may be established based on a mean square error function, and relying on mean square error alone tends to make the noise-reduced image overly smooth and lose image details. In the embodiment of the present specification, adding the perceptual loss function avoids such over-smoothing and the resulting loss of image details.
In one or more embodiments of the present description, the first loss function is

L_1 = E_x[(D_1(G_1(x)) - 1)^2]

the second loss function is

L_2 = E_y[(D_2(G_2(y)) - 1)^2]

and the third loss function is

L_3 = E_x[‖G_2(G_1(x)) - x‖_2] + E_y[‖G_1(G_2(y)) - y‖_2]

wherein G_2(G_1(x)) denotes the second noise-carrying generated image, G_1(G_2(y)) denotes the second noise-free generated image, ‖G_2(G_1(x)) - x‖_2 denotes the 2-norm between the second noise-carrying generated image and the noise-carrying original image, and ‖G_1(G_2(y)) - y‖_2 denotes the 2-norm between the second noise-free generated image and the noise-free original image. The perceptual loss function is

L_p = E[(1/(d·w·h)) · ‖φ(G_1(x)) - φ(y)‖_F^2]

wherein φ denotes the trained deep neural network, φ(G_1(x)) denotes the feature map of the first noise-free generated image extracted by the trained deep neural network, φ(y) denotes the feature map of the noise-free original image extracted by the trained deep neural network, ‖φ(G_1(x)) - φ(y)‖_F^2 denotes the square of the F-norm between the feature map of the first noise-free generated image and the feature map of the noise-free original image, and d, w and h respectively denote the depth, width and height of the feature map extracted by the trained deep neural network. In one or more embodiments of the present description, the preset loss function is then

L = L_1 + L_2 + λ_1·L_3 + λ_2·L_p

wherein λ_1 and λ_2 are parameters respectively used for adjusting the proportions of L_3 and L_p.
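Putting the four components together, the preset generator loss can be sketched numerically as below. The least-squares adversarial form, the flattened 2-norm for the cycle terms, and the default λ values are illustrative assumptions; the patent does not fix concrete λ_1 and λ_2.

```python
import numpy as np

def preset_loss(d1_fake, d2_fake, x, x_cyc, y, y_cyc,
                feat_gen, feat_real, lam1=10.0, lam2=1.0):
    l1 = np.mean((d1_fake - 1.0) ** 2)            # first loss: adversarial term for G1
    l2 = np.mean((d2_fake - 1.0) ** 2)            # second loss: adversarial term for G2
    l3 = (np.linalg.norm((x_cyc - x).ravel())     # third loss: cycle consistency,
          + np.linalg.norm((y_cyc - y).ravel()))  # 2-norms of both reconstruction errors
    d, w, h = feat_gen.shape                      # perceptual loss: squared F-norm of the
    lp = np.sum((feat_gen - feat_real) ** 2) / (d * w * h)  # feature-map difference
    return l1 + l2 + lam1 * l3 + lam2 * lp        # L = L1 + L2 + lam1*L3 + lam2*Lp
```

Here `d1_fake`/`d2_fake` stand for the discriminators' scores on the generated images, `x_cyc` and `y_cyc` for the second noise-carrying and second noise-free generated images, and `feat_gen`/`feat_real` for the feature maps extracted by the trained deep neural network.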
The trained deep neural network can be used for extracting the feature maps of the noise-free original image and the first noise-free generated image. In practical applications, the trained deep neural network may adopt a trained VGG network (Visual Geometry Group Network); of course, other types of neural networks may also be adopted, and the embodiments of this specification are not limited thereto.
In practical applications, optimizing the first generation model and the second generation model of the cyclic consistency generation countermeasure network may be implemented by adjusting the network parameters of the neural networks, and may include: calculating loss values for the output results of the first generation model and the second generation model through the preset loss function, and feeding the loss values back to the models by back propagation to adjust the model parameters. In practical application, the network parameters of the cyclic consistency generation countermeasure network can be optimized through the Adam optimization algorithm.
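The patent names Adam as the optimization algorithm without giving hyperparameters; the single parameter update below is a standard Adam step with commonly used default values, shown only to make the back-propagation feedback concrete.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam parameter update: biased first/second moment estimates,
    # bias correction, then the scaled gradient step.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

In training, `grad` would be the back-propagated gradient of the preset loss with respect to the generator parameters, and `t` the iteration count.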
In practical application, the training effect of a neural network can be measured by the loss value of its loss function. In the embodiment of the present specification, the training target for the cyclic consistency generation countermeasure network may be to reduce the loss value of the preset loss function as much as possible. In the iterative training process, the network parameters are adjusted until the loss value of the preset loss function falls below a certain preset value and tends to be stable, at which point the trained cyclic consistency generation countermeasure network is obtained.
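The stopping rule described above (loss below a preset value and no longer changing much) can be sketched as follows; the window size and tolerance are illustrative assumptions, since the patent leaves "tends to be stable" unquantified.

```python
def should_stop(loss_history, threshold, window=5, tol=1e-3):
    # Stop when the last `window` loss values are all below `threshold`
    # and their spread is within `tol` (i.e. the loss has stabilized).
    if len(loss_history) < window:
        return False
    recent = loss_history[-window:]
    return max(recent) < threshold and (max(recent) - min(recent)) < tol
```

Called once per training epoch with the running list of preset-loss values, this returns True only once both conditions in the text are met.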
In practical applications, the image is usually denoised by generating a noise-free image from a noisy image, and in the embodiment of the present specification, the trained image denoising model may be the first generation model of the trained cyclic consistency generation countermeasure network.
In the embodiments of the present specification, the countermeasure network is generated with cyclic consistency. During training, not only is the mapping relationship from noisy images to noise-free images learned in the noise-reduced image generation direction but, owing to the addition of the second generation model in the direction opposite to the noise-reduced image generation direction, the mapping relationship from noise-free images to noisy images is also learned. The mapping relationship learned by the first generation model can thus be corrected, so that the first generation model forms a correct mapping in the desired noise-reduced image generation direction, the deviation between the generated noise-free image and the actual noise-free image is reduced, the quality of the noise-reduced image generated by the image noise reduction model is improved, and the learning ability and learning efficiency of the neural network are improved.
On the other hand, the generated first noise-free generated image and first noise-carrying generated image are further discriminated by the first discriminant model and the second judgment model, and the discrimination results are fed back to the optimization training of the cyclic consistency generation countermeasure network. The goal is for the generation models to produce images that the discrimination models mistakenly judge as the non-generated image category, so that the noise-free images generated by the trained image noise reduction model are as close as possible to actual noise-free images, thereby further improving the quality of the noise-free images generated by the image noise reduction model.
The above is a training method of an image denoising model provided in an embodiment of the present specification, and an embodiment of the present specification further provides a specific application scenario of an image denoising model obtained by training the training method of the image denoising model. The specific application scenario may be an image denoising method provided in an embodiment of the present specification, and as shown in fig. 5, the image denoising method may include the following steps:
Step 21: acquire a noisy image.
In practical applications, the noisy image may be acquired by an image acquisition apparatus. In some cases, the image acquisition apparatus may be a magnetic resonance imaging device, and the acquired noisy image may comprise a noisy undersampled magnetic resonance image acquired by the magnetic resonance imaging device.
Step 22: input the noisy image obtained in step 21 into the trained image noise reduction model to generate a noise-reduced image.
In this embodiment, the trained image noise reduction model may be obtained by, but is not limited to, training by using the training method of the image noise reduction model in the above embodiment. For the related description of the training method for the image noise reduction model, reference may be made to the contents shown in the embodiments of the present specification, and for avoiding redundancy, the description is not repeated here.
In practical application, the noisy image data is input into the trained image noise reduction model, and a noise-free image can be generated through the trained image noise reduction model. In the field of nuclear magnetic resonance imaging, a noise-free magnetic resonance image can be generated from a noisy magnetic resonance image acquired by nuclear magnetic resonance imaging equipment, which facilitates clinical diagnosis.
By the image denoising method in the embodiment of the present specification, a high-quality noise-free image can be generated from the acquired noisy image data. In particular, compared with the prior art, the image denoising model obtained through the training method in the embodiment of the present specification can further improve the quality of the generated noise-free image.
The above is a training method of an image denoising model provided in the embodiments of the present specification, and an image denoising method based on the training method of the image denoising model. In the embodiment of the specification, based on the same inventive concept as the training method of the image noise reduction model, a corresponding training device of the image noise reduction model is also provided. As shown in fig. 6, the apparatus specifically includes:
an obtaining module 101, configured to obtain an image sample set; wherein the image sample set comprises: a noisy original image and a noise-free original image corresponding to the noisy original image;
a first generating module 102, configured to generate a first generating model of a countermeasure network from input cycle consistency of the noisy original image acquired by the acquiring module 101, and generate a first noise-free generated image;
a second generating module 103, configured to input the first noise-free generated image generated by the first generating module 102 into a second generation model of the cyclic consistency generation countermeasure network, and generate a second noise-containing generated image;
a third generating module 104, configured to input the noise-free original image acquired by the acquiring module 101 into the second generation model of the cyclic consistency generation countermeasure network, and generate a first noise-carrying generated image;
a fourth generating module 105, configured to input the first noise-carrying generated image generated by the third generating module 104 into the first generation model of the cyclic consistency generation countermeasure network, and generate a second noise-free generated image;
a first determining module 106, configured to input the first noise-free generated image generated by the first generating module 102 and the noise-free original image acquired by the acquiring module 101 into the first discriminant model of the cyclic consistency generation countermeasure network, so as to obtain a first discrimination result of whether the first noise-free generated image belongs to the generated image category;
a second judging module 107, configured to input the first noise-carrying generated image generated by the third generating module 104 and the noise-carrying original image acquired by the acquiring module 101 into the second judgment model of the cyclic consistency generation countermeasure network, so as to obtain a second judgment result of whether the first noise-carrying generated image belongs to the generated image category;
an optimizing module 108, configured to optimize the generation of the confrontation network for the cycle consistency according to the first determination result obtained by the first determining module 106, the second determination result obtained by the second determining module 107, the first noise-free generated image generated by the first generating module 102, the noise-free original image obtained by the obtaining module 101, the second noise-containing generated image generated by the second generating module 103, the second noise-free generated image generated by the fourth generating module 105, and the noise-containing original image obtained by the obtaining module 101, so as to obtain a trained image noise reduction model.
The specific workflow of the above device embodiment may include: the obtaining module 101 obtains an image sample set; the first generating module 102 inputs the noisy original image into the first generation model of the cyclic consistency generation countermeasure network to generate a first noise-free generated image; the second generating module 103 inputs the first noise-free generated image into the second generation model of the cyclic consistency generation countermeasure network to generate a second noise-carrying generated image; the third generating module 104 inputs the noise-free original image into the second generation model to generate a first noise-carrying generated image; the fourth generating module 105 inputs the first noise-carrying generated image into the first generation model to generate a second noise-free generated image; the first judging module 106 inputs the first noise-free generated image and the noise-free original image into the first discriminant model to obtain a first discrimination result of whether the first noise-free generated image belongs to the generated image category; the second judging module 107 inputs the first noise-carrying generated image and the noise-carrying original image into the second judgment model to obtain a second judgment result of whether the first noise-carrying generated image belongs to the generated image category; and the optimization module 108 optimizes the cyclic consistency generation countermeasure network according to the first discrimination result, the second judgment result, the first noise-free generated image, the noise-free original image, the second noise-carrying generated image, the second noise-free generated image and the noise-carrying original image, so as to obtain a trained image noise reduction model.
In an embodiment, the optimization module 108 specifically includes:
a calculating unit, configured to substitute the first determination result, the second determination result, the first noise-free generated image, the noise-free original image, the second noise-carrying generated image, the second noise-free generated image, and the noise-carrying original image into a preset loss function respectively to obtain a loss value;
and the optimization unit is used for optimizing the first generation model and the second generation model of the cyclic consistency generation countermeasure network according to the loss value so as to obtain a trained image noise reduction model.
In one embodiment, the preset loss function includes a first loss function, a second loss function, a third loss function, and a perceptual loss function; the calculating unit is specifically configured to substitute the first determination result into the first loss function to obtain a first loss value; substituting the second judgment result into the second loss function to obtain a second loss value; substituting the second noise-free generated image, the noise-free original image, the second noise-carrying generated image and the noise-carrying original image into the third loss function to obtain a third loss value; and substituting the first noise-free generated image and the noise-free original image into the perception loss function to obtain a fourth loss value.
In one embodiment, the first loss function is

L_1 = E_x[(D_1(G_1(x)) - 1)^2]

wherein x represents the noisy original image, G_1(x) represents the first noise-free generated image, D_1 represents the first discriminant model, D_1(G_1(x)) represents the first discrimination result, and E_x represents the mathematical expectation of the function when the input is a noisy original image;

the second loss function is

L_2 = E_y[(D_2(G_2(y)) - 1)^2]

wherein y represents the noise-free original image, G_2(y) represents the first noise-carrying generated image, D_2 represents the second judgment model, D_2(G_2(y)) represents the second judgment result, and E_y represents the mathematical expectation of the function when the input is a noise-free original image;

the third loss function is

L_3 = E_x[‖G_2(G_1(x)) - x‖_2] + E_y[‖G_1(G_2(y)) - y‖_2]

wherein G_2(G_1(x)) represents the second noise-carrying generated image, G_1(G_2(y)) represents the second noise-free generated image, ‖G_2(G_1(x)) - x‖_2 represents the 2-norm between the second noise-carrying generated image and the noise-carrying original image, and ‖G_1(G_2(y)) - y‖_2 represents the 2-norm between the second noise-free generated image and the noise-free original image;

the perceptual loss function is

L_p = E[(1/(d·w·h)) · ‖φ(G_1(x)) - φ(y)‖_F^2]

wherein φ represents the trained deep neural network, φ(G_1(x)) represents the feature map of the first noise-free generated image extracted by the trained deep neural network, φ(y) represents the feature map of the noise-free original image extracted by the trained deep neural network, ‖φ(G_1(x)) - φ(y)‖_F^2 represents the square of the F-norm between the feature map of the first noise-free generated image and the feature map of the noise-free original image, and d, w and h respectively represent the depth, width and height of the feature map extracted by the trained deep neural network;

then, the preset loss function is

L = L_1 + L_2 + λ_1·L_3 + λ_2·L_p

wherein λ_1 and λ_2 are parameters respectively used for adjusting the proportions of L_3 and L_p.
In one embodiment, the first generation model comprises a first convolutional network layer and a first deconvolution network layer, and the second generation model comprises a second convolutional network layer and a second deconvolution network layer; the first convolutional network layer and the first deconvolution network layer are connected by skip connections, and the second convolutional network layer and the second deconvolution network layer are connected by skip connections.
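A skip connection between a convolutional layer and its matching deconvolutional layer simply routes the encoder feature map to the decoder. Concatenation along the channel axis is assumed below; element-wise addition is the other common choice, and the patent does not specify which is used.

```python
import numpy as np

def skip_connect(decoder_feat, encoder_feat):
    # Concatenate the encoder feature map onto the decoder feature map
    # along the channel axis (axis 0 here, for (channels, h, w) arrays).
    assert decoder_feat.shape[1:] == encoder_feat.shape[1:]
    return np.concatenate([decoder_feat, encoder_feat], axis=0)
```

The next deconvolutional layer then sees both the upsampled features and the corresponding encoder features, which helps preserve image detail that would otherwise be lost through the bottleneck.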
In one embodiment, the apparatus further comprises:
the first discriminant model optimization module is used for optimizing the first discriminant model of the cyclic consistency generation countermeasure network through the fourth loss function;
and the second judgment model optimization module is used for optimizing the second judgment model of the cyclic consistency generation countermeasure network through the fifth loss function.
In one embodiment, the fourth loss function is

L_4 = E_y[(D_1(y) - 1)^2] + E_x[(D_1(G_1(x)))^2]

and the fifth loss function is

L_5 = E_x[(D_2(x) - 1)^2] + E_y[(D_2(G_2(y)))^2]

wherein D_1(y) represents the discrimination result of the first discriminant model on whether the noise-free original image is of the original image category, and D_2(x) represents the judgment result of the second judgment model on whether the noisy original image is of the original image category.
In the embodiments of the present specification, the countermeasure network is generated with cyclic consistency. During training, not only is the mapping relationship from noisy images to noise-free images learned in the noise-reduced image generation direction but, owing to the addition of the second generation model in the direction opposite to the noise-reduced image generation direction, the mapping relationship from noise-free images to noisy images is also learned. The mapping relationship learned by the first generation model can thus be corrected, so that the first generation model forms a correct mapping in the desired noise-reduced image generation direction, the deviation between the generated noise-free image and the actual noise-free image is reduced, the quality of the noise-reduced image generated by the image noise reduction model is improved, and the learning ability and learning efficiency of the neural network are improved.
On the other hand, the generated first noise-free generated image and first noise-carrying generated image are further discriminated by the first discriminant model and the second judgment model, and the discrimination results are fed back to the optimization training of the cyclic consistency generation countermeasure network. The goal is for the generation models to produce images that the discrimination models mistakenly judge as the non-generated image category, so that the noise-free images generated by the trained image noise reduction model are as close as possible to actual noise-free images, thereby further improving the quality of the noise-free images generated by the image noise reduction model.
Based on the same inventive concept as the image denoising method, an embodiment of the present specification further provides an image denoising device, as shown in fig. 7, the device specifically includes:
an image acquisition module 201, configured to acquire a noisy image;
and the noise reduction module 202 is configured to input the image with noise into a trained image noise reduction model to generate a noise-reduced image.
The specific workflow of this image noise reduction apparatus embodiment may include: the image acquisition module 201 acquires a noisy image, and the noise reduction module 202 inputs the noisy image into a trained image noise reduction model to generate a noise-reduced image.
With the image noise reduction device of the embodiments of the present specification, a noisy image can be noise-reduced by an image noise reduction model to generate a high-quality noise-free image. In particular, compared with the prior art, the quality of the generated noise-free image can be further improved by using an image noise reduction model obtained with the training method of the embodiments of the present specification.
An embodiment of the present specification further provides an electronic device. Referring to fig. 8, at the hardware level the electronic device includes a processor 601, and optionally further includes an internal bus, a network interface 602, and a memory. The memory may include a memory 603, such as a random-access memory (RAM), and may further include a non-volatile memory 604, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor 601, the network interface 602, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in fig. 8, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing programs. Specifically, a program may include program code comprising computer operating instructions. The memory may include the memory 603 and the non-volatile memory 604, and provides instructions and data to the processor 601.
The processor 601 reads the corresponding computer program from the non-volatile memory 604 into the memory 603 and runs it, forming the training apparatus for the image noise reduction model at the logical level. The processor 601 executes the program stored in the memory and is configured to perform at least the following operations:
acquiring an image sample set; wherein the image sample set comprises: a noise-free original image and a noisy original image corresponding to the noise-free original image;
inputting the noisy original image into a first generation model of a cycle-consistency generative adversarial network to generate a first noise-free generated image;
inputting the first noise-free generated image into a second generation model of the cycle-consistency generative adversarial network to generate a second noisy generated image;
inputting the noise-free original image into the second generation model of the cycle-consistency generative adversarial network to generate a first noisy generated image;
inputting the first noisy generated image into the first generation model of the cycle-consistency generative adversarial network to generate a second noise-free generated image;
inputting the first noise-free generated image and the noise-free original image into a first discrimination model of the cycle-consistency generative adversarial network to obtain a first discrimination result indicating whether the first noise-free generated image belongs to the generated-image category;
inputting the first noisy generated image and the noisy original image into a second discrimination model of the cycle-consistency generative adversarial network to obtain a second discrimination result indicating whether the first noisy generated image belongs to the generated-image category;
and optimizing the cycle-consistency generative adversarial network according to the first discrimination result, the second discrimination result, the first noise-free generated image, the noise-free original image, the second noisy generated image, the second noise-free generated image, and the noisy original image, so as to obtain a trained image noise reduction model.
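As a rough illustration of the optimization step above, the following minimal NumPy sketch combines the two discrimination results and the cycle reconstructions into a single scalar loss. It assumes least-squares adversarial terms and illustrative weights lam1 and lam2 (standing in for the proportion parameters); the patent's exact loss forms may differ.

```python
import numpy as np

def adversarial_loss(d_fake):
    # Least-squares generator loss: pushes discriminator outputs toward 1 ("real")
    return float(np.mean((d_fake - 1.0) ** 2))

def cycle_loss(x, x_cycled, y, y_cycled):
    # 2-norm reconstruction penalty in both mapping directions
    return float(np.linalg.norm(x_cycled - x) + np.linalg.norm(y_cycled - y))

def total_loss(d1_fake, d2_fake, x, x_cycled, y, y_cycled,
               perceptual=0.0, lam1=10.0, lam2=1.0):
    # L = L1 + L2 + lam1 * L3 + lam2 * Lp  (lam1, lam2 are illustrative weights)
    l1 = adversarial_loss(d1_fake)             # from the first discrimination result
    l2 = adversarial_loss(d2_fake)             # from the second discrimination result
    l3 = cycle_loss(x, x_cycled, y, y_cycled)  # from the two cycle reconstructions
    return l1 + l2 + lam1 * l3 + lam2 * perceptual
```

An ideal generator pair (discriminators fully fooled, perfect cycle reconstruction, zero perceptual distance) drives this loss to zero; the optimizer updates the first and second generation models to descend it.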
The method performed by the training apparatus for the image noise reduction model disclosed in the embodiment of fig. 1 of this specification can be applied to, or implemented by, the processor 601. The processor 601 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps, and logic blocks disclosed in the embodiments of this specification. A general-purpose processor may be a microprocessor, or it may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of this specification may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may further execute the method executed by the training apparatus for the image noise reduction model in fig. 1, and implement the functions of the training apparatus for the image noise reduction model in the embodiment shown in fig. 1, which are not described herein again in this specification.
The present specification further proposes a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the method performed by the training apparatus for an image noise reduction model in the embodiment shown in fig. 1, and at least to perform:
acquiring an image sample set; wherein the image sample set comprises: a noise-free original image and a noisy original image corresponding to the noise-free original image;
inputting the noisy original image into a first generation model of a cycle-consistency generative adversarial network to generate a first noise-free generated image;
inputting the first noise-free generated image into a second generation model of the cycle-consistency generative adversarial network to generate a second noisy generated image;
inputting the noise-free original image into the second generation model of the cycle-consistency generative adversarial network to generate a first noisy generated image;
inputting the first noisy generated image into the first generation model of the cycle-consistency generative adversarial network to generate a second noise-free generated image;
inputting the first noise-free generated image and the noise-free original image into a first discrimination model of the cycle-consistency generative adversarial network to obtain a first discrimination result indicating whether the first noise-free generated image belongs to the generated-image category;
inputting the first noisy generated image and the noisy original image into a second discrimination model of the cycle-consistency generative adversarial network to obtain a second discrimination result indicating whether the first noisy generated image belongs to the generated-image category;
and optimizing the cycle-consistency generative adversarial network according to the first discrimination result, the second discrimination result, the first noise-free generated image, the noise-free original image, the second noisy generated image, the second noise-free generated image, and the noisy original image, so as to obtain a trained image noise reduction model.
As will be appreciated by one skilled in the art, embodiments of the present specification may be provided as methods, systems, or computer program products. Accordingly, embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present specification may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present specification are described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to the embodiments. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, and any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ……" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.
The above description is only an example of the embodiments of the present disclosure, and is not intended to limit the embodiments of the present disclosure. Various modifications and variations to the embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiments of the present specification should be included in the scope of the claims of the embodiments of the present specification.

Claims (10)

1. A training method of an image noise reduction model, characterized by comprising the following steps:
acquiring an image sample set; wherein the image sample set comprises: a noise-free original image and a noisy original image corresponding to the noise-free original image;
inputting the noisy original image into a first generation model of a cycle-consistency generative adversarial network to generate a first noise-free generated image;
inputting the first noise-free generated image into a second generation model of the cycle-consistency generative adversarial network to generate a second noisy generated image;
inputting the noise-free original image into the second generation model of the cycle-consistency generative adversarial network to generate a first noisy generated image;
inputting the first noisy generated image into the first generation model of the cycle-consistency generative adversarial network to generate a second noise-free generated image;
inputting the first noise-free generated image and the noise-free original image into a first discrimination model of the cycle-consistency generative adversarial network to obtain a first discrimination result indicating whether the first noise-free generated image belongs to the generated-image category;
inputting the first noisy generated image and the noisy original image into a second discrimination model of the cycle-consistency generative adversarial network to obtain a second discrimination result indicating whether the first noisy generated image belongs to the generated-image category;
and optimizing the cycle-consistency generative adversarial network according to the first discrimination result, the second discrimination result, the first noise-free generated image, the noise-free original image, the second noisy generated image, the second noise-free generated image, and the noisy original image, so as to obtain a trained image noise reduction model.
2. The method of claim 1, wherein optimizing the cycle-consistency generative adversarial network according to the first discrimination result, the second discrimination result, the first noise-free generated image, the noise-free original image, the second noisy generated image, the second noise-free generated image, and the noisy original image to obtain a trained image noise reduction model comprises:
substituting the first discrimination result, the second discrimination result, the first noise-free generated image, the noise-free original image, the second noisy generated image, the second noise-free generated image, and the noisy original image into a preset loss function to obtain a loss value;
and optimizing the first generation model and the second generation model of the cycle-consistency generative adversarial network according to the loss value to obtain a trained image noise reduction model.
3. The method of claim 1, wherein the preset loss function comprises a first loss function, a second loss function, a third loss function, and a perceptual loss function; and substituting the first discrimination result, the second discrimination result, the first noise-free generated image, the noise-free original image, the second noisy generated image, the second noise-free generated image, and the noisy original image into the preset loss function to obtain a loss value comprises:
substituting the first discrimination result into the first loss function to obtain a first loss value;
substituting the second discrimination result into the second loss function to obtain a second loss value;
substituting the second noise-free generated image, the noise-free original image, the second noisy generated image, and the noisy original image into the third loss function to obtain a third loss value;
and substituting the first noise-free generated image and the noise-free original image into the perceptual loss function to obtain a fourth loss value.
4. The method of claim 3, wherein:
the first loss function is
L1 = Ex[(D1(G1(x)) - 1)^2]
wherein x represents the noisy original image, G1(x) represents the first noise-free generated image, D1 represents the first discrimination model, D1(G1(x)) represents the first discrimination result, and Ex represents the mathematical expectation of a function whose input is the noisy original image;
the second loss function is
L2 = Ey[(D2(G2(y)) - 1)^2]
wherein y represents the noise-free original image, G2(y) represents the first noisy generated image, D2 represents the second discrimination model, D2(G2(y)) represents the second discrimination result, and Ey represents the mathematical expectation of a function whose input is the noise-free original image;
the third loss function is
L3 = Ex[‖G2(G1(x)) - x‖2] + Ey[‖G1(G2(y)) - y‖2]
wherein G2(G1(x)) represents the second noisy generated image, G1(G2(y)) represents the second noise-free generated image, ‖G2(G1(x)) - x‖2 represents the 2-norm between the second noisy generated image and the noisy original image, and ‖G1(G2(y)) - y‖2 represents the 2-norm between the second noise-free generated image and the noise-free original image;
the perceptual loss function is
Lp = (1/(d·w·h)) ‖φ(G1(x)) - φ(y)‖F^2
wherein φ represents a trained deep neural network, φ(G1(x)) represents the feature map of the first noise-free generated image extracted by the trained deep neural network, φ(y) represents the feature map of the noise-free original image extracted by the trained deep neural network, ‖φ(G1(x)) - φ(y)‖F^2 represents the square of the F-norm between the feature map of the first noise-free generated image and the feature map of the noise-free original image, and d, w, and h respectively represent the depth, width, and height of the feature map extracted by the trained deep neural network;
then, the preset loss function is
L = L1 + L2 + λ1·L3 + λ2·Lp
wherein λ1 and λ2 are parameters for respectively adjusting the proportions of L3 and Lp.
5. The method of claim 1, wherein the first generation model comprises: a first convolutional network layer and a first deconvolution network layer; the second generation model comprises: a second convolutional network layer and a second deconvolution network layer; the first convolutional network layer and the first deconvolution network layer are connected by skip connections; and the second convolutional network layer and the second deconvolution network layer are connected by skip connections.
6. The method of claim 1, wherein the method further comprises: optimizing the first discrimination model of the cycle-consistency generative adversarial network through a fourth loss function; and optimizing the second discrimination model of the cycle-consistency generative adversarial network through a fifth loss function;
the fourth loss function is
LD1 = Ey[(D1(y) - 1)^2] + Ex[(D1(G1(x)))^2]
the fifth loss function is
LD2 = Ex[(D2(x) - 1)^2] + Ey[(D2(G2(y)))^2]
wherein x represents the noisy original image, G1(x) represents the first noise-free generated image, D1 represents the first discrimination model, D1(G1(x)) represents the first discrimination result, Ex represents the mathematical expectation of a function whose input is the noisy original image, y represents the noise-free original image, G2(y) represents the first noisy generated image, D2 represents the second discrimination model, D2(G2(y)) represents the second discrimination result, Ey represents the mathematical expectation of a function whose input is the noise-free original image, D1(y) represents the discrimination result of the first discrimination model as to whether the noise-free original image belongs to the original-image category, and D2(x) represents the discrimination result of the second discrimination model as to whether the noisy original image belongs to the original-image category.
7. An image noise reduction method based on the training method of the image noise reduction model according to claim 1, wherein the image noise reduction method comprises:
acquiring a noisy image;
and inputting the noisy image into a trained image noise reduction model to generate a noise-reduced image.
8. An apparatus for training an image noise reduction model, comprising:
an acquisition module, configured to acquire an image sample set; wherein the image sample set comprises: a noisy original image and a noise-free original image corresponding to the noisy original image;
a first generation module, configured to input the noisy original image acquired by the acquisition module into a first generation model of a cycle-consistency generative adversarial network to generate a first noise-free generated image;
a second generation module, configured to input the first noise-free generated image generated by the first generation module into a second generation model of the cycle-consistency generative adversarial network to generate a second noisy generated image;
a third generation module, configured to input the noise-free original image acquired by the acquisition module into the second generation model of the cycle-consistency generative adversarial network to generate a first noisy generated image;
a fourth generation module, configured to input the first noisy generated image generated by the third generation module into the first generation model of the cycle-consistency generative adversarial network to generate a second noise-free generated image;
a first discrimination module, configured to input the first noise-free generated image generated by the first generation module and the noise-free original image acquired by the acquisition module into a first discrimination model of the cycle-consistency generative adversarial network to obtain a first discrimination result indicating whether the first noise-free generated image belongs to the generated-image category;
a second discrimination module, configured to input the first noisy generated image generated by the third generation module and the noisy original image acquired by the acquisition module into a second discrimination model of the cycle-consistency generative adversarial network to obtain a second discrimination result indicating whether the first noisy generated image belongs to the generated-image category;
and an optimization module, configured to optimize the cycle-consistency generative adversarial network according to the first discrimination result obtained by the first discrimination module, the second discrimination result obtained by the second discrimination module, the first noise-free generated image generated by the first generation module, the noise-free original image acquired by the acquisition module, the second noisy generated image generated by the second generation module, the second noise-free generated image generated by the fourth generation module, and the noisy original image acquired by the acquisition module, so as to obtain a trained image noise reduction model.
9. An image noise reduction apparatus based on the training apparatus of image noise reduction model according to claim 8, wherein the image noise reduction apparatus comprises:
the image acquisition module is used for acquiring a noisy image;
and the noise reduction module is used for inputting the image with the noise into a trained image noise reduction model so as to generate a noise-reduced image.
10. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the training method of an image noise reduction model according to any of claims 1 to 7.
CN202010557928.5A 2020-06-18 2020-06-18 Training method and device of image noise reduction model, electronic equipment and storage medium Pending CN111899185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010557928.5A CN111899185A (en) 2020-06-18 2020-06-18 Training method and device of image noise reduction model, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010557928.5A CN111899185A (en) 2020-06-18 2020-06-18 Training method and device of image noise reduction model, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111899185A true CN111899185A (en) 2020-11-06

Family

ID=73206815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010557928.5A Pending CN111899185A (en) 2020-06-18 2020-06-18 Training method and device of image noise reduction model, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111899185A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734669A (en) * 2021-01-07 2021-04-30 苏州浪潮智能科技有限公司 Training method of anomaly detection model based on improved noise reduction self-encoder
CN113362258A (en) * 2021-07-08 2021-09-07 南方科技大学 Method, device, equipment and medium for denoising fundus color photograph image of cataract patient
CN113516238A (en) * 2020-11-25 2021-10-19 阿里巴巴集团控股有限公司 Model training method, denoising method, model, device and storage medium
CN113744158A (en) * 2021-09-09 2021-12-03 讯飞智元信息科技有限公司 Image generation method and device, electronic equipment and storage medium
CN113781352A (en) * 2021-09-16 2021-12-10 科大讯飞股份有限公司 Light removal method and device, electronic equipment and storage medium
CN117710249A (en) * 2023-12-14 2024-03-15 赛昇数字经济研究中心(深圳)有限公司 Image video generation method and device for interactive dynamic fuzzy scene recovery

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138838A1 (en) * 2017-11-09 2019-05-09 Boe Technology Group Co., Ltd. Image processing method and processing device
CN110223254A (en) * 2019-06-10 2019-09-10 大连民族大学 A kind of image de-noising method generating network based on confrontation
CN110390647A (en) * 2019-06-14 2019-10-29 平安科技(深圳)有限公司 The OCT image denoising method and device for generating network are fought based on annular
CN110503616A (en) * 2019-08-28 2019-11-26 上海海事大学 A kind of production network applied to picture denoising
CN110634108A (en) * 2019-08-30 2019-12-31 北京工业大学 Composite degraded live webcast video enhancement method based on element-cycle consistency countermeasure network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138838A1 (en) * 2017-11-09 2019-05-09 Boe Technology Group Co., Ltd. Image processing method and processing device
CN110223254A (en) * 2019-06-10 2019-09-10 大连民族大学 A kind of image de-noising method generating network based on confrontation
CN110390647A (en) * 2019-06-14 2019-10-29 平安科技(深圳)有限公司 The OCT image denoising method and device for generating network are fought based on annular
CN110503616A (en) * 2019-08-28 2019-11-26 上海海事大学 A kind of production network applied to picture denoising
CN110634108A (en) * 2019-08-30 2019-12-31 北京工业大学 Composite degraded live webcast video enhancement method based on element-cycle consistency countermeasure network

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516238A (en) * 2020-11-25 2021-10-19 阿里巴巴集团控股有限公司 Model training method, denoising method, model, device and storage medium
CN112734669A (en) * 2021-01-07 2021-04-30 苏州浪潮智能科技有限公司 Training method of anomaly detection model based on improved noise reduction self-encoder
CN112734669B (en) * 2021-01-07 2022-12-02 苏州浪潮智能科技有限公司 Training method of anomaly detection model based on improved noise reduction self-encoder
CN113362258A (en) * 2021-07-08 2021-09-07 南方科技大学 Method, device, equipment and medium for denoising fundus color photograph image of cataract patient
CN113362258B (en) * 2021-07-08 2022-10-21 南方科技大学 Method, device, equipment and medium for denoising fundus color photograph image of cataract patient
CN113744158A (en) * 2021-09-09 2021-12-03 讯飞智元信息科技有限公司 Image generation method and device, electronic equipment and storage medium
CN113781352A (en) * 2021-09-16 2021-12-10 科大讯飞股份有限公司 Light removal method and device, electronic equipment and storage medium
CN117710249A (en) * 2023-12-14 2024-03-15 赛昇数字经济研究中心(深圳)有限公司 Image video generation method and device for interactive dynamic fuzzy scene recovery

Similar Documents

Publication Publication Date Title
CN111899185A (en) Training method and device of image noise reduction model, electronic equipment and storage medium
WO2021253316A1 (en) Method and apparatus for training image noise reduction model, electronic device, and storage medium
US20200210708A1 (en) Method and device for video classification
CN111476719B (en) Image processing method, device, computer equipment and storage medium
CN109859113B (en) Model generation method, image enhancement method, device and computer-readable storage medium
CN111358430B (en) Training method and device for magnetic resonance imaging model
CN112580668B (en) Background fraud detection method and device and electronic equipment
WO2022067874A1 (en) Training method and apparatus for image data augmentation network, and storage medium
CN115631112B (en) Building contour correction method and device based on deep learning
CN111223061A (en) Image correction method, correction device, terminal device and readable storage medium
CN113505848A (en) Model training method and device
JP2023515367A (en) Out-of-distribution detection of input instances to model
CN110874855B (en) Collaborative imaging method and device, storage medium and collaborative imaging equipment
GB2587833A (en) Image modification styles learned from a limited set of modified images
CN112070853A (en) Image generation method and device
CN110310314A (en) Method for registering images, device, computer equipment and storage medium
CN116543246A (en) Training method of image denoising model, image denoising method, device and equipment
CN112232361B (en) Image processing method and device, electronic equipment and computer readable storage medium
Stefaniga et al. Performance analysis of morphological operation in cpu and gpu for medical images
CN113919476A (en) Image processing method and device, electronic equipment and storage medium
Yang et al. An end‐to‐end perceptual enhancement method for UHD portrait images
CN117440104B (en) Data compression reconstruction method based on target significance characteristics
US20230298326A1 (en) Image augmentation method, electronic device and readable storage medium
CN114972090B (en) Training method of image processing model, image processing method and device
CN109741274A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination