WO2022126480A1 - Method and device for high-energy image synthesis based on a Wasserstein generative adversarial network model - Google Patents


Info

Publication number
WO2022126480A1
Authority
WO
WIPO (PCT)
Prior art keywords
preset
energy image
image
network model
generative adversarial
Prior art date
Application number
PCT/CN2020/137188
Other languages
English (en)
Chinese (zh)
Inventor
郑海荣
胡战利
梁栋
刘新
周豪杰
Original Assignee
深圳先进技术研究院
Priority date
Filing date
Publication date
Application filed by 深圳先进技术研究院
Priority to PCT/CN2020/137188
Publication of WO2022126480A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Definitions

  • The present application relates to the technical field of image processing, and in particular, to a high-energy image synthesis method and device based on a Wasserstein generative adversarial network model.
  • Dual-energy computed tomography (CT) has gradually become an effective non-invasive diagnostic method. Unlike traditional computed tomography, it scans with X-rays at two different energies.
  • The dataset obtained in this way contains richer scanning information and supports more clinical applications, such as urinary tract stone detection, tophi detection, and bone and metal artifact removal.
  • Because the scanning mode of dual-energy computed tomography can replace half of the original high-energy scan with a low-energy scan, the radiation dose can also be reduced.
  • The embodiments of the present application provide a high-energy image synthesis method and device based on a Wasserstein generative adversarial network model, so as to solve the prior-art problems of large interference deviation and poor image quality when the dual-energy CT method is used to obtain CT images.
  • This embodiment provides a high-energy image synthesis method based on a Wasserstein generative adversarial network model, including: acquiring a low-energy image to be synthesized; and inputting the low-energy image to be synthesized into a pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image. The Wasserstein generative adversarial network model is obtained by training a preset generative adversarial network model based on low-energy image samples, standard high-energy images, and a preset loss function.
  • the Wasserstein generative adversarial network model includes a generator network and a discriminator network.
  • The generator network is used to extract the image features of the low-energy image to be synthesized and to synthesize a high-energy image based on those features; the discriminator network is used to judge the high-energy images synthesized by the generator network and to perform reverse adjustment training. The preset loss function is established at least from a loss function used to reduce image noise and remove image artifacts.
  • The loss function used to reduce image noise and remove image artifacts is established from the gradient of the standard high-energy image in the x direction, the gradient of the standard high-energy image in the y direction, the gradient of the synthesized high-energy image in the x direction, and the gradient of the synthesized high-energy image in the y direction.
  • The preset loss function is further established according to at least one of the following loss functions: a preset pixel difference calibration function for calibrating the pixel difference between the synthesized high-energy image and the standard high-energy image; a preset structural loss function for calibrating the structural information difference between the synthesized high-energy image and the standard high-energy image; and a preset multi-scale feature loss function for calibrating the texture information difference between the synthesized high-energy image and the standard high-energy image.
  • the preset loss function is established according to a preset gradient loss function, a preset pixel difference calibration function, a preset structural loss function, a preset multi-scale feature loss function, and a preset generative adversarial network model.
  • Before inputting the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image, the method further includes: obtaining the Wasserstein generative adversarial network model by training the preset generative adversarial network model based on the low-energy image samples, the standard high-energy images, and the preset loss function. This training includes: inputting the low-energy image samples into the generator network of the preset generative adversarial network model to obtain a synthesized first high-energy image; inputting the first high-energy image into the discriminator network of the preset generative adversarial network model to obtain a first discrimination result; and, based on the first high-energy image and the standard high-energy image, calculating a first loss value according to the preset loss function, where the first loss value is used to update the parameters of the preset generative adversarial network model until the preset generative adversarial network model converges.
  • When the preset loss function includes the preset pixel difference calibration function, calculating the first loss value according to the preset loss function includes: calculating, by using the preset pixel difference calibration function, the pixel difference value between the first high-energy image and the standard high-energy image; and determining the pixel difference value as the first loss value.
  • When the preset loss function includes the preset structural loss function, calculating the first loss value according to the preset loss function includes: determining, by using the preset structural loss function, the structural difference value between the first high-energy image and the standard high-energy image; and determining the structural difference value as the first loss value.
  • When the preset loss function includes the preset multi-scale feature loss function, calculating the first loss value according to the preset loss function includes: determining, by using the preset multi-scale feature loss function, the texture information difference value between the first high-energy image and the standard high-energy image; and determining the texture information difference value as the first loss value.
  • The generator network of the Wasserstein generative adversarial network model includes a semantic segmentation network with 4 layers of encoders and decoders; each encoder layer is connected to the corresponding decoder layer by a skip link, and a 9-layer residual network is included between the encoding layers and the decoding layers of the semantic segmentation network.
  • The discriminator network of the Wasserstein generative adversarial network model includes 8 groups of 3*3 convolutional layers with the activation function LReLU; counting from left to right, the convolution stride of the convolutional layers at odd positions is 1, and the convolution stride of the convolutional layers at even positions is 2.
  • This embodiment also provides a high-energy image synthesis device based on the Wasserstein generative adversarial network model, including an acquisition module and an input module. The acquisition module is used to acquire the low-energy image to be synthesized; the input module is used to input the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image. The Wasserstein generative adversarial network model is obtained by training a preset generative adversarial network model based on low-energy image samples, standard high-energy images, and a preset loss function.
  • the Wasserstein generative adversarial network model includes a generator network and a discriminator network.
  • The generator network is used to extract the image features of the low-energy image to be synthesized and to synthesize a high-energy image based on those features; the discriminator network is used to judge the high-energy images synthesized by the generator network and to perform reverse adjustment training. The preset loss function is established at least according to a loss function used to reduce image noise and remove image artifacts.
  • The loss function used to reduce image noise and remove image artifacts can be established from the gradient of the standard high-energy image in the x direction, the gradient of the standard high-energy image in the y direction, the gradient of the synthesized high-energy image in the x direction, and the gradient of the synthesized high-energy image in the y direction.
  • The preset loss function is further established according to at least one of the following loss functions: a preset pixel difference calibration function for calibrating the pixel difference between the synthesized high-energy image and the standard high-energy image; a preset structural loss function for calibrating the structural information difference between the synthesized high-energy image and the standard high-energy image; and a preset multi-scale feature loss function for calibrating the texture information difference between the synthesized high-energy image and the standard high-energy image.
  • the preset loss function is established according to a preset gradient loss function, a preset pixel difference calibration function, a preset structural loss function, a preset multi-scale feature loss function, and a preset generative adversarial network model.
  • The device further includes a training module for obtaining the Wasserstein generative adversarial network model by training the preset generative adversarial network model based on low-energy image samples, standard high-energy images, and the preset loss function. The training module includes: a first input unit, used to input the low-energy image samples into the generator network of the preset generative adversarial network model to obtain the synthesized first high-energy image; a second input unit, used to input the first high-energy image into the discriminator network of the preset generative adversarial network model to obtain the first discrimination result; a computing unit, used to calculate the first loss value according to the preset loss function based on the first high-energy image and the standard high-energy image, where the first loss value is used to update the parameters of the preset generative adversarial network model until the preset generative adversarial network model converges; and an updating unit, used to update the preset generative adversarial network model based on the first loss value and the first discrimination result until the preset generative adversarial network model converges, and to determine the converged preset generative adversarial network model as the Wasserstein generative adversarial network model.
  • The calculation unit is configured to: calculate the pixel difference value between the first high-energy image and the standard high-energy image by using the preset pixel difference calibration function; and determine the pixel difference value as the first loss value.
  • The calculation unit is configured to: determine the structural difference value between the first high-energy image and the standard high-energy image by using the preset structural loss function; and determine the structural difference value as the first loss value.
  • The calculation unit is configured to: determine the texture information difference value between the first high-energy image and the standard high-energy image by using the preset multi-scale feature loss function; and determine the texture information difference value as the first loss value.
  • The generator network of the Wasserstein generative adversarial network model includes a semantic segmentation network with 4 layers of encoders and decoders; each encoder layer is connected to the corresponding decoder layer by a skip link, and a 9-layer residual network is included between the encoding layers and the decoding layers of the semantic segmentation network.
  • The discriminator network of the Wasserstein generative adversarial network model includes 8 groups of 3*3 convolutional layers with the activation function LReLU; counting from left to right, the convolution stride of the convolutional layers at odd positions is 1, and the convolution stride of the convolutional layers at even positions is 2.
  • An electronic device is provided, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the steps of the high-energy image synthesis method based on the Wasserstein generative adversarial network model described in the first aspect are implemented.
  • A computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the high-energy image synthesis method based on the Wasserstein generative adversarial network model described in the first aspect are implemented.
  • Since the Wasserstein generative adversarial network model is obtained by training a preset generative adversarial network model based on low-energy image samples, standard high-energy images, and a preset loss function, and the preset loss function is established at least according to the loss function for reducing image noise and removing image artifacts, inputting the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to synthesize the target high-energy image can reduce the effect of image noise and image artifacts on image edges, thereby improving the quality of the synthesized target high-energy image.
  • FIG. 1 is a schematic diagram of the implementation flow of a high-energy image synthesis method based on a Wasserstein generative adversarial network model provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a generator network of a Wasserstein generative adversarial network model provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a discriminator network structure of a Wasserstein generative adversarial network model provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a model training implementation flow of a Wasserstein generative adversarial network model provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an application process of the method provided by the embodiment of the present application in practice.
  • FIG. 6 is a schematic structural diagram of a high-energy image synthesis device based on a Wasserstein generative adversarial network model according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the embodiment of the present application provides a high-energy image synthesis method based on the Wasserstein generative adversarial network model.
  • The execution body of the method may be various types of computing devices, or an application (APP) installed on a computing device.
  • The computing device may be, for example, a user terminal such as a mobile phone, a tablet computer, or a smart wearable device, or a server.
  • the embodiment of the present application takes the execution body of the method as a server as an example to introduce the method.
  • the server is used as an example to introduce the method in this embodiment of the present application, which is only an exemplary illustration, and does not limit the protection scope of the claims corresponding to the solution.
  • The implementation process of the method provided by the embodiment of the present application is shown in FIG. 1 and includes the following steps.
  • Step 11: acquire the low-energy image to be synthesized.
  • the low-energy image can be understood as the energy spectral CT image of the imaging object under the low-dose radiation/low-energy radiation.
  • the low-energy image may include a lung energy spectrum CT image under a low-dose X-ray.
  • spectral CT images obtained under low-dose radiation/low-energy radiation may contain a lot of noise and artifacts, which affect the image quality.
  • Therefore, a preset method can be used to synthesize the low-energy image into a high-energy CT image with high density, high resolution, and low noise.
  • The low-energy image to be synthesized in the embodiment of the present application may be a low-energy CT image from which a high-energy image is to be synthesized.
  • the low-energy image to be synthesized can be acquired through the X-ray tube under the condition of lower tube current and lower tube voltage.
  • The low-energy image to be synthesized can also be obtained by using a statistical reconstruction method, taking advantage of its accurate physical model and insensitivity to noise.
  • Step 12: input the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image.
  • the Wasserstein generative adversarial network model is based on low-energy image samples, standard high-energy images and preset loss functions, and is obtained by training a preset generative adversarial network model.
  • the Wasserstein generative adversarial network model includes a generator network and a discriminator network.
  • The generator network is used to extract the image features of the low-energy image to be synthesized and to synthesize a high-energy image based on those features; the discriminator network is used to judge the high-energy images synthesized by the generator network and to perform reverse adjustment training.
  • the target high-energy image can be understood as a high-energy CT image with high density, high resolution and low noise synthesized based on the low-energy image.
  • a standard high-energy image can be understood as a high-energy CT image with high density, high resolution, high texture detail, and low noise.
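  • For illustration only, the following is a minimal inference sketch in PyTorch. The checkpoint path, tensor shapes, and the Generator class (sketched later in this description) are assumptions for the example, not details taken from the application:

```python
# Hypothetical inference sketch: apply a pre-trained generator to a
# low-energy CT slice to obtain the synthesized target high-energy image.
import torch

generator = Generator()  # the Generator class is sketched later in this text
generator.load_state_dict(torch.load("wgan_generator.pt"))  # assumed checkpoint
generator.eval()

with torch.no_grad():
    low_energy = torch.randn(1, 1, 256, 256)  # stand-in for a 256x256 low-energy slice
    target_high = generator(low_energy)       # synthesized target high-energy image
```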
  • Before the low-energy image to be synthesized is input into the pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image, the Wasserstein generative adversarial network model can be obtained in advance by training the preset generative adversarial network model based on the low-energy image samples, the standard high-energy images, and the preset loss function.
  • Specifically, the generator network, the discriminator network, and the parameters of the preset generative adversarial network model may be determined first; then the preset generative adversarial network model is determined according to the determined generator network, discriminator network, and parameters; finally, the preset generative adversarial network model is trained based on the low-energy image samples, the standard high-energy images, and the preset loss function to obtain the Wasserstein generative adversarial network model.
  • the generator network in this embodiment of the present application may include a 4-layer encoding and decoding semantic segmentation network U-Net and a feature extraction network.
  • a 9-layer residual network (Residual Block in the figure) can be included between the encoding layer and the decoding layer of the semantic segmentation network, and the residual network can be composed of nine 3x3 convolutions and ReLU activation functions.
  • A skip-link mode may be selected to connect the corresponding encoder and decoder layers.
  • The feature extraction network can include two 3x3 convolutions with LReLU activation functions (Conv+LReLU in the figure); usually, before entering the next network layer, the feature information extracted by the feature extraction network is processed first.
  • the number of channels can be gradually doubled three times from 64 (n64 in the figure) in the first layer to 512 (n512 in the figure), and then reach the residual network.
  • The decoding process is symmetrical with the encoding process, and the final reconstruction network (Conv in the figure) compresses the output to 1 channel (n1 in the figure) with a 3*3 convolution.
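  • The following PyTorch sketch shows one way the generator just described could be assembled: a 4-level U-Net whose encoder and decoder levels are joined by skip links, two Conv+LReLU feature-extraction layers per level, channels doubling from 64 to 512, 9 residual blocks at the bottleneck, and a final 3*3 reconstruction convolution down to 1 channel. The use of max pooling, transposed convolutions, and an LReLU slope of 0.2 are assumptions where the text is silent:

```python
import torch
import torch.nn as nn

def conv_lrelu(cin, cout):
    # "Conv+LReLU" unit: 3x3 convolution followed by a leaky ReLU
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.LeakyReLU(0.2, inplace=True))

class ResidualBlock(nn.Module):
    # one of the 9 residual blocks: two 3x3 convolutions with an identity skip
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                  nn.ReLU(inplace=True),
                                  nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        chans, cin = [64, 128, 256, 512], 1   # channels double from 64 to 512
        self.encoders = nn.ModuleList(
            nn.Sequential(conv_lrelu(c_prev, c), conv_lrelu(c, c))
            for c_prev, c in zip([cin] + chans[:-1], chans))
        self.pool = nn.MaxPool2d(2)           # pooling between encoder levels
        self.res = nn.Sequential(*[ResidualBlock(512) for _ in range(9)])
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(c_in, c_out, 2, stride=2)
            for c_out, c_in in zip([256, 128, 64], [512, 256, 128]))
        self.decoders = nn.ModuleList(
            nn.Sequential(conv_lrelu(2 * c, c), conv_lrelu(c, c))
            for c in [256, 128, 64])          # 2*c: upsampled + skip features
        self.recon = nn.Conv2d(64, 1, 3, padding=1)  # reconstruction to 1 channel

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:    # keep features for the skip links
                skips.append(x)
                x = self.pool(x)
        x = self.res(x)                       # 9 residual blocks at the bottleneck
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.recon(x)
```

  • With a 256x256 input, three pooling steps reduce the bottleneck to 32x32 and three upsampling steps restore the original size, matching the symmetric encoding and decoding described above.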
  • The discriminator network can include 8 groups of 3*3 convolutional layers with the activation function LReLU (Conv+LReLU in the figure); counting from left to right, the convolution stride s of the convolutional layers at odd positions is 1 (s1 in the figure), and the convolution stride s of the convolutional layers at even positions is 2 (s2 in the figure).
  • In other words, the convolution stride s alternates between 1 and 2.
  • the number of channels n can be gradually doubled from 32 to 256.
  • The last two layers (FC(1024) LReLU and FC(1) in the figure) are two fully connected layers that determine whether the input image is a standard high-energy image.
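  • A matching sketch of the described discriminator follows: 8 Conv+LReLU groups with 3*3 kernels, strides alternating 1 and 2, channels doubling from 32 to 256, then FC(1024) with LReLU and FC(1). The 256x256 input size and the LReLU slope are assumptions; the final layer has no sigmoid, since a Wasserstein critic outputs an unbounded score:

```python
import torch.nn as nn

def disc_block(cin, cout, stride):
    # "Conv+LReLU" group: 3x3 convolution with the given stride
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
                         nn.LeakyReLU(0.2, inplace=True))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        layers, cin = [], 1
        # odd positions use stride 1, even positions stride 2 (left to right)
        for cout, s in [(32, 1), (32, 2), (64, 1), (64, 2),
                        (128, 1), (128, 2), (256, 1), (256, 2)]:
            layers.append(disc_block(cin, cout, s))
            cin = cout
        self.features = nn.Sequential(*layers)
        # four stride-2 layers reduce a 256x256 input to 16x16
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(256 * 16 * 16, 1024),  # FC(1024)
                                  nn.LeakyReLU(0.2, inplace=True),
                                  nn.Linear(1024, 1))              # FC(1), no sigmoid

    def forward(self, x):
        return self.head(self.features(x))
```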
  • the objective function and corresponding parameters of the preset generative adversarial network model can be further determined.
  • Specifically, a generative adversarial network model centered on the Wasserstein distance measure can be used as the preset generative adversarial network model, and its objective function is shown in the following formula (1).
  • L_WGAN(G, D) represents the objective of the Wasserstein generative adversarial network model
  • G represents the generator network of the Wasserstein generative adversarial network model
  • D represents the discriminator network of the Wasserstein generative adversarial network model
  • the generator network G is optimized under the condition of a fixed discriminator network D
  • P_r represents the probability distribution of high-energy images
  • P_z represents the probability distribution of synthesized high-energy images
  • λ represents the penalty coefficient, which is used to avoid the mode collapse and vanishing-gradient problems that can occur during the training of the preset generative adversarial network model
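  • The image of formula (1) is not reproduced in this text. A plausible reconstruction, assuming the standard Wasserstein objective with a gradient penalty on interpolated samples ŷ (consistent with the definitions above), is:

```latex
\min_{G}\max_{D}\; L_{WGAN}(G,D)
  = \mathbb{E}_{y \sim P_r}\!\left[D(y)\right]
  - \mathbb{E}_{x \sim P_z}\!\left[D(G(x))\right]
  - \lambda\,\mathbb{E}_{\hat{y}}\!\left[\left(\lVert \nabla_{\hat{y}} D(\hat{y}) \rVert_2 - 1\right)^{2}\right]
\tag{1}
```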
  • model training can be performed on the preset generative adversarial network model based on low-energy image samples, standard high-energy images and a preset loss function to obtain a Wasserstein generative adversarial network model.
  • Step 41: input the low-energy image samples into the generator network of the preset generative adversarial network model to obtain a synthesized first high-energy image.
  • For example, a low-energy image sample with an image size of 256x256 can be input into the generator network of the preset generative adversarial network model, so that the feature extraction network in the generator network extracts the high-frequency and low-frequency information of the low-energy image; image reconstruction is then performed on the extracted feature information to obtain the synthesized first high-energy image.
  • Specifically, the high-frequency and low-frequency information of the low-energy image can first be extracted through the feature extraction network in the generator network; then the extracted information is encoded by the encoding layer of the semantic segmentation network.
  • During encoding, a pooling operation needs to be performed on the high-frequency and low-frequency information of the low-energy image.
  • The number of channels is gradually doubled three times, from 64 in the first layer to 512, reaching the residual network in the generator network; finally, decoding is performed by the decoding layer of the semantic segmentation network.
  • During decoding, an upsampling operation needs to be performed first.
  • The number of channels is gradually compressed from 512 in the first decoding layer to 64 and reaches the reconstruction network, which produces the synthesized first high-energy image.
  • Step 42: input the first high-energy image into the discriminator network of the preset generative adversarial network model to obtain a first discrimination result.
  • Specifically, the first high-energy image may be input into the discriminator network of the preset generative adversarial network model to obtain the first discrimination result.
  • In one case, the first discrimination result may indicate that the generator network of the preset generative adversarial network model has converged, that is, the first high-energy image synthesized by the generator network has reached the standard of the standard high-energy image, and the training of the generator network can be stopped.
  • In another case, the first discrimination result may indicate that the generator network of the preset generative adversarial network model has not converged, that is, the first high-energy image synthesized by the generator network does not yet reach the standard of the standard high-energy image, and further training of the generator network is still required.
  • the above two situations are only an exemplary description of the embodiments of the present application, and do not impose any limitations on the embodiments of the present application.
  • Even if the first discrimination result indicates that the first high-energy image is similar to the standard high-energy image, in order to avoid inaccurate discrimination results caused by the low precision of the discriminator network, the embodiment of the present application may further train the generator network and the discriminator network of the preset generative adversarial network model based on the first discrimination result.
  • For details, please refer to the following steps 43 to 44.
  • Step 43: based on the first high-energy image and the standard high-energy image, calculate a first loss value according to the preset loss function; the first loss value is used to update the parameters of the preset generative adversarial network model until the preset generative adversarial network model converges.
  • the preset loss function is established at least according to the loss function for reducing image noise and removing image artifacts.
  • the loss function can be a gradient loss function.
  • The gradient loss function used to reduce image noise and remove image artifacts can be established from the gradient of the standard high-energy image in the x direction, the gradient of the standard high-energy image in the y direction, the gradient of the synthesized high-energy image in the x direction, and the gradient of the synthesized high-energy image in the y direction, for example, as shown in the following formula (2):
  • L_gdl(G(x), Y) represents the gradient loss function
  • G(x) represents the synthesized high-energy image
  • Y represents the standard high-energy image
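  • The image of formula (2) is likewise not reproduced. A plausible reconstruction of a gradient difference loss consistent with these definitions, comparing absolute image gradients in the x and y directions, is:

```latex
L_{gdl}(G(x), Y) = \sum_{i,j}
  \Big( \big\lvert\, |\nabla_x Y_{i,j}| - |\nabla_x G(x)_{i,j}| \,\big\rvert^{2}
      + \big\lvert\, |\nabla_y Y_{i,j}| - |\nabla_y G(x)_{i,j}| \,\big\rvert^{2} \Big)
\tag{2}
```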
  • Taking the case where the preset loss function is the gradient loss function for reducing image noise and removing image artifacts shown in the above formula (2) as an example, the first loss value can be calculated based on the first high-energy image and the standard high-energy image as follows.
  • the gradient loss function shown in formula (2) is used to calculate the gradient difference between the first high-energy image and the standard high-energy image, and the calculated gradient difference is determined as the first loss value.
  • Step 44: update the preset generative adversarial network model based on the first loss value and the first discrimination result until the preset generative adversarial network model converges, and determine the converged preset generative adversarial network model as the Wasserstein generative adversarial network model.
  • Specifically, the Adam optimizer can be used to optimize the preset generative adversarial network model based on the first loss value and the first discrimination result; when the curve of the preset loss function converges to the preset range, the converged preset generative adversarial network model is determined as the Wasserstein generative adversarial network model.
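  • A hedged sketch of the training procedure in steps 41 to 44 follows, assuming a WGAN-GP-style critic update; the learning rate, betas, gradient-penalty weight, and loop structure are illustrative assumptions rather than values from the application:

```python
import torch

def gradient_penalty(D, real, fake, device):
    # penalty term that keeps the critic approximately 1-Lipschitz
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(mix).sum(), mix, create_graph=True)[0]
    return ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def train(G, D, loader, loss_fns, weights, epochs=100,
          lambda_gp=10.0, device="cuda"):
    # Adam, as named in the text; lr and betas are illustrative assumptions
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.9))
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.9))
    for _ in range(epochs):
        for low, high in loader:  # low-energy sample, standard high-energy image
            low, high = low.to(device), high.to(device)
            fake = G(low)                     # step 41: first high-energy image
            # step 42 plus critic update: discriminate and adjust the critic
            d_loss = (D(fake.detach()).mean() - D(high).mean()
                      + lambda_gp * gradient_penalty(D, high, fake.detach(), device))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # steps 43-44: first loss value as a weighted sum of components
            g_loss = -weights["adv"] * D(fake).mean()
            for name, fn in loss_fns.items():  # e.g. "mse", "ssim", "content", "gdl"
                g_loss = g_loss + weights[name] * fn(fake, high)
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```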
  • the low-energy image to be synthesized can be input into the pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image.
  • Since the Wasserstein generative adversarial network model is obtained by training a preset generative adversarial network model based on low-energy image samples, standard high-energy images, and a preset loss function, and the preset loss function is established at least according to the loss function used to reduce image noise and remove image artifacts, inputting the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to synthesize the target high-energy image can reduce the effect of image noise and image artifacts on image edges, thereby improving the quality of the synthesized target high-energy image.
  • The embodiment of the present application thereby uses the high-energy image synthesis method based on the Wasserstein generative adversarial network model to solve the prior-art problems of large interference deviation and poor image quality when CT images are obtained by dual-energy CT scanning.
  • the preset loss function in step 43 in Embodiment 1 of the present application may further include a preset pixel difference calibration function for calibrating the pixel difference between the synthesized high-energy image and the standard high-energy image.
  • the preset pixel difference calibration function for calibrating the pixel difference between the synthesized high-energy image and the standard high-energy image may be shown in the following formula (3).
  • L_MSE(G(x), Y) represents the preset pixel difference calibration function
  • G(x) represents the synthesized first high-energy image
  • Y represents the standard high-energy image
  • w and h represent the sampling width and height, respectively
  • (i, j) represents a pixel of the image
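  • The image of formula (3) is not reproduced; given the definitions above, the conventional mean-squared-error form would be:

```latex
L_{MSE}(G(x), Y) = \frac{1}{wh} \sum_{i=1}^{w} \sum_{j=1}^{h}
  \big( G(x)_{(i,j)} - Y_{(i,j)} \big)^{2}
\tag{3}
```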
  • The preset loss function may also include a preset structural loss function for calibrating the structural information difference between the synthesized high-energy image and the standard high-energy image, as shown, for example, in the following formula (4).
  • L_SSIM(G(x), Y) represents the preset structural loss function
  • G(x) represents the synthesized first high-energy image
  • Y represents the standard high-energy image
  • SSIM(G(x), Y) represents the structural similarity function, which is calculated as shown in the following formula (5).
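  • The images of formulas (4) and (5) are not reproduced; plausible reconstructions, assuming the conventional SSIM-based loss and the standard SSIM definition with means μ, (co)variances σ, and stabilizing constants c_1 and c_2, are:

```latex
L_{SSIM}(G(x), Y) = 1 - SSIM(G(x), Y) \tag{4}

SSIM(G(x), Y) =
  \frac{\left(2\mu_{G(x)}\mu_{Y} + c_1\right)\left(2\sigma_{G(x)Y} + c_2\right)}
       {\left(\mu_{G(x)}^{2} + \mu_{Y}^{2} + c_1\right)\left(\sigma_{G(x)}^{2} + \sigma_{Y}^{2} + c_2\right)}
\tag{5}
```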
  • The preset loss function may also include a preset multi-scale feature loss function for calibrating the difference in texture information between the synthesized high-energy image and the standard high-energy image. After adding this loss function, high-frequency information of the image can be effectively extracted.
  • the preset multi-scale feature loss function used for calibrating the difference in texture information between the synthesized high-energy image and the standard high-energy image may be, for example, as shown in the following formula (6).
  • L_content(G(x), Y) represents the preset multi-scale feature loss function
  • G(x) represents the synthesized first high-energy image
  • Y represents the standard high-energy image
  • conv represents the multi-scale convolution kernel
  • m represents the number of multi-scale convolution kernels
  • size is the size of the sampled image
  • λ_m is the weight of each scale; for example, the weights can be set to 0.3, 0.2, and 0.3
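  • The image of formula (6) is not reproduced; one plausible form consistent with these definitions, comparing the features extracted by each multi-scale convolution kernel conv_m across the m scales, is:

```latex
L_{content}(G(x), Y) = \sum_{m} \frac{\lambda_m}{size}
  \big\lVert conv_m(G(x)) - conv_m(Y) \big\rVert^{2}
\tag{6}
```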
  • the preset loss function in this embodiment of the present application may also be established according to at least one of the following loss functions.
  • a preset pixel difference calibration function for calibrating the pixel difference between the synthesized high-energy image and the standard high-energy image;
  • a preset structural loss function for calibrating the structural information difference between the synthesized high-energy image and the standard high-energy image;
  • a preset multi-scale feature loss function for calibrating the texture information difference between the synthesized high-energy image and the standard high-energy image.
  • Taking the case where the preset loss function is established based on the preset gradient loss function, the preset pixel difference calibration function, the preset structural loss function, the preset multi-scale feature loss function, and the preset generative adversarial network model as an example, steps 43 and 44 in Embodiment 1 are described below.
  • In this case, the preset loss function may be as shown in the following formula (7).
  • G represents the generator network of the preset generative adversarial network model
  • D represents the discriminator network of the preset generative adversarial network model
  • L_MSE(G(x), Y) represents the preset pixel difference calibration function
  • L_SSIM(G(x), Y) represents the preset structural loss function
  • L_content(G(x), Y) represents the preset multi-scale feature loss function
  • L_gdl(G(x), Y) represents the gradient loss function
  • λ_adv, λ_mse, λ_ssim, λ_content, and λ_gdl respectively represent the weights of the corresponding loss functions
  • the weight of each loss function can be set as a hyperparameter.
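  • The image of formula (7) is not reproduced; given the terms defined above, a plausible reconstruction of the combined objective is:

```latex
\min_{G}\max_{D}\;
  \lambda_{adv}\, L_{WGAN}(G, D)
  + \lambda_{mse}\, L_{MSE}(G(x), Y)
  + \lambda_{ssim}\, L_{SSIM}(G(x), Y)
  + \lambda_{content}\, L_{content}(G(x), Y)
  + \lambda_{gdl}\, L_{gdl}(G(x), Y)
\tag{7}
```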
  • In this case, the first loss value can be calculated according to the preset loss function based on the first high-energy image and the standard high-energy image, and the first loss value is used to update the parameters of the preset generative adversarial network model until the preset generative adversarial network model converges.
  • Specifically, the pixel difference value between the first high-energy image and the standard high-energy image can be calculated by using the preset pixel difference calibration function, and the pixel difference value is taken as the first component of the first loss value.
  • By using the preset structural loss function, the structural difference value between the first high-energy image and the standard high-energy image is determined and taken as the second component of the first loss value.
  • By using the preset multi-scale feature loss function, the texture information difference value between the first high-energy image and the standard high-energy image is determined and taken as the third component of the first loss value.
  • Through the gradient loss function for reducing image noise and removing image artifacts, the gradient difference between the first high-energy image and the standard high-energy image is determined and taken as the fourth component of the first loss value.
  • Finally, a weighted summation of the above four components can be performed to obtain the final first loss value; the preset generative adversarial network model is then updated based on the final first loss value and the first discrimination result until the preset generative adversarial network model converges, and the converged preset generative adversarial network model is determined as the Wasserstein generative adversarial network model.
  • Since the Wasserstein generative adversarial network model is obtained by training a preset generative adversarial network model based on low-energy image samples, standard high-energy images, and a preset loss function, and the preset loss function is established at least according to the loss function for reducing image noise and removing image artifacts, the preset pixel difference calibration function for calibrating the pixel difference between the synthesized high-energy image and the standard high-energy image, the preset structural loss function for calibrating the structural information difference between the synthesized high-energy image and the standard high-energy image, and the preset multi-scale feature loss function for calibrating the texture information difference between the synthesized high-energy image and the standard high-energy image, inputting the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to synthesize the target high-energy image can reduce the influence of image noise and image artifacts on the edges of the image, thereby improving the quality of the synthesized target high-energy image.
  • In addition, the pixel difference between the synthesized high-energy image and the standard high-energy image can be calibrated to avoid differences in the details of the synthesized high-energy image; the structural information difference between the synthesized high-energy image and the standard high-energy image can be calibrated to preserve the structural information, image brightness, and contrast of the synthesized high-energy image; and the texture information difference between the synthesized high-energy image and the standard high-energy image can be calibrated to ensure that the local pattern and texture information of the image are effectively extracted.
  • FIG. 5 is a schematic diagram of an actual application process of the method provided by the embodiment of the present application. The process includes the following steps.
  • First, a standard high-energy image (HECT in the figure) is obtained and sliced; the discriminator network (Discriminator in the figure) of the Wasserstein generative adversarial network model is then trained based on the standard high-energy image, and the trained discriminator network discriminates the synthesized high-energy images.
  • During training, the gradient difference between the synthesized high-energy image and the standard high-energy image can be calculated based on the gradient flow (Gradient Flow in the figure), and the parameters of the generator network are then updated in reverse based on the preset gradient loss function (Gradient Difference in the figure).
  • To avoid differences in the details of the synthesized high-energy image, the preset loss function can also include an MSE term (MSE in the figure); to ensure the image brightness, contrast, and structural information of the synthesized high-energy image, it can also include an SSIM term (SSIM in the figure); and to ensure that the local pattern and texture information of the image are effectively extracted, it can also include a content term (Content in the figure).
  • The above describes the high-energy image synthesis method based on the Wasserstein generative adversarial network model provided by the embodiment of the present application. Based on the same idea, the embodiment of the present application also provides a high-energy image synthesis device based on the Wasserstein generative adversarial network model, as shown in FIG. 6.
  • The device 60 includes an acquisition module 61 and an input module 62. The acquisition module 61 is used to acquire the low-energy image to be synthesized; the input module 62 is used to input the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image. The Wasserstein generative adversarial network model is obtained by training a preset generative adversarial network model based on low-energy image samples, standard high-energy images, and a preset loss function, and includes a generator network and a discriminator network.
  • The generator network is used to extract the image features of the low-energy image to be synthesized and to synthesize a high-energy image based on those features; the discriminator network is used to judge the high-energy images synthesized by the generator network and to perform reverse adjustment training. The preset loss function is established at least according to the loss function used to reduce image noise and remove image artifacts.
  • The loss function used to reduce image noise and remove image artifacts can be established from the gradient of the standard high-energy image in the x direction, the gradient of the standard high-energy image in the y direction, the gradient of the synthesized high-energy image in the x direction, and the gradient of the synthesized high-energy image in the y direction.
  • The preset loss function may also be established according to at least one of the following loss functions: a preset pixel difference calibration function for calibrating the pixel difference between the synthesized high-energy image and the standard high-energy image; a preset structural loss function for calibrating the structural information difference between the synthesized high-energy image and the standard high-energy image; and a preset multi-scale feature loss function for calibrating the texture information difference between the synthesized high-energy image and the standard high-energy image.
  • the preset loss function is established according to a preset gradient loss function, a preset pixel difference calibration function, a preset structural loss function, a preset multi-scale feature loss function, and a preset generative adversarial network model.
  • The device further includes a training module for obtaining the Wasserstein generative adversarial network model by training the preset generative adversarial network model based on low-energy image samples, standard high-energy images, and the preset loss function. The training module includes: a first input unit, used to input the low-energy image samples into the generator network of the preset generative adversarial network model to obtain the synthesized first high-energy image; a second input unit, used to input the first high-energy image into the discriminator network of the preset generative adversarial network model to obtain the first discrimination result; a computing unit, used to calculate the first loss value according to the preset loss function based on the first high-energy image and the standard high-energy image, where the first loss value is used to update the parameters of the preset generative adversarial network model until the preset generative adversarial network model converges; and an updating unit, used to update the preset generative adversarial network model based on the first loss value and the first discrimination result until the preset generative adversarial network model converges, and to determine the converged preset generative adversarial network model as the Wasserstein generative adversarial network model.
  • The calculation unit is configured to: calculate the pixel difference value between the first high-energy image and the standard high-energy image by using the preset pixel difference calibration function; and determine the pixel difference value as the first loss value.
  • The calculation unit is configured to: determine the structural difference value between the first high-energy image and the standard high-energy image by using the preset structural loss function; and determine the structural difference value as the first loss value.
  • The calculation unit is configured to: determine the texture information difference value between the first high-energy image and the standard high-energy image by using the preset multi-scale feature loss function; and determine the texture information difference value as the first loss value.
  • The generator network of the Wasserstein generative adversarial network model includes a semantic segmentation network with 4 layers of encoders and decoders; each encoder layer is connected to the corresponding decoder layer by a skip link, and a 9-layer residual network is included between the encoding layers and the decoding layers of the semantic segmentation network.
  • The discriminator network of the Wasserstein generative adversarial network model includes 8 groups of 3*3 convolutional layers with the activation function LReLU; counting from left to right, the convolution stride of the convolutional layers at odd positions is 1, and the convolution stride of the convolutional layers at even positions is 2.
  • Since the Wasserstein generative adversarial network model is obtained by training a preset generative adversarial network model based on low-energy image samples, standard high-energy images, and a preset loss function, and the preset loss function is established at least according to the loss function used to reduce image noise and remove image artifacts, inputting the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model through the input module to obtain the synthesized target high-energy image can reduce the effect of image noise and image artifacts on image edges, thereby improving the quality of the synthesized target high-energy image.
  • The electronic device 700 includes, but is not limited to, components such as a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, and a power supply 711.
  • The structure of the electronic device shown in FIG. 7 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently.
  • In the embodiment of the present application, electronic devices include, but are not limited to, mobile phones and the like.
  • The processor 710 is used to: acquire the low-energy image to be synthesized; and input the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image. The Wasserstein generative adversarial network model is obtained by training a preset generative adversarial network model based on low-energy image samples, standard high-energy images, and a preset loss function.
  • the Wasserstein generative adversarial network model includes a generator network and a discriminator network.
  • The generator network is used to extract the image features of the low-energy image to be synthesized and to synthesize a high-energy image based on those features; the discriminator network is used to judge the high-energy images synthesized by the generator network and to perform reverse adjustment training. The preset loss function is established at least according to the loss function used to reduce image noise and remove image artifacts.
  • The loss function used to reduce image noise and remove image artifacts is established from the gradient of the standard high-energy image in the x direction, the gradient of the standard high-energy image in the y direction, the gradient of the synthesized high-energy image in the x direction, and the gradient of the synthesized high-energy image in the y direction.
  • The preset loss function can also be established according to at least one of the following loss functions: a preset pixel difference calibration function for calibrating the pixel difference between the synthesized high-energy image and the standard high-energy image; a preset structural loss function for calibrating the structural information difference between the synthesized high-energy image and the standard high-energy image; and a preset multi-scale feature loss function for calibrating the texture information difference between the synthesized high-energy image and the standard high-energy image.
  • the preset loss function is established according to a preset gradient loss function, a preset pixel difference calibration function, a preset structural loss function, a preset multi-scale feature loss function, and a preset generative adversarial network model.
  • Before inputting the low-energy image to be synthesized into the pre-trained Wasserstein generative adversarial network model to obtain the synthesized target high-energy image, the method further includes: obtaining the Wasserstein generative adversarial network model by training the preset generative adversarial network model based on the low-energy image samples, the standard high-energy images, and the preset loss function. This training includes: inputting the low-energy image samples into the generator network of the preset generative adversarial network model to obtain a synthesized first high-energy image; inputting the first high-energy image into the discriminator network of the preset generative adversarial network model to obtain a first discrimination result; and, based on the first high-energy image and the standard high-energy image, calculating a first loss value according to the preset loss function, where the first loss value is used to update the parameters of the preset generative adversarial network model until the preset generative adversarial network model converges.
  • When the preset loss function includes the preset pixel difference calibration function, calculating the first loss value according to the preset loss function includes: calculating, by using the preset pixel difference calibration function, the pixel difference value between the first high-energy image and the standard high-energy image; and determining the pixel difference value as the first loss value.
  • When the preset loss function includes the preset structural loss function, calculating the first loss value according to the preset loss function includes: determining, by using the preset structural loss function, the structural difference value between the first high-energy image and the standard high-energy image; and determining the structural difference value as the first loss value.
  • When the preset loss function includes the preset multi-scale feature loss function, calculating the first loss value according to the preset loss function includes: determining, by using the preset multi-scale feature loss function, the texture information difference value between the first high-energy image and the standard high-energy image; and determining the texture information difference value as the first loss value.
  • The generator network of the Wasserstein generative adversarial network model includes a semantic segmentation network with 4 layers of encoders and decoders; each encoder layer is connected to the corresponding decoder layer by a skip link, and a 9-layer residual network is included between the encoding layers and the decoding layers of the semantic segmentation network.
  • The discriminator network of the Wasserstein generative adversarial network model includes 8 groups of 3*3 convolutional layers with the activation function LReLU; counting from left to right, the convolution stride of the convolutional layers at odd positions is 1, and the convolution stride of the convolutional layers at even positions is 2.
  • The memory 709 is used to store a computer program that can be executed on the processor 710; when the computer program is executed by the processor 710, the above-mentioned functions of the processor 710 are implemented.
  • The radio frequency unit 701 can be used for receiving and sending signals during the sending and receiving of information or during a call. Specifically, downlink data received from the base station is processed by the processor 710, and uplink data is sent to the base station.
  • the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 701 can also communicate with the network and other devices through a wireless communication system.
  • the electronic device provides the user with wireless broadband Internet access through the network module 702, such as helping the user to send and receive emails, browse web pages, access streaming media, and the like.
  • The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702, or stored in the memory 709, into audio signals and output them as sound. The audio output unit 703 may also provide audio output related to a specific function performed by the electronic device 700 (e.g., a call signal reception sound or a message reception sound).
  • the audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 704 is used to receive audio or video signals.
  • The input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042; the graphics processor 7041 processes the image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frames may be displayed on the display unit 706.
  • the image frames processed by the graphics processor 7041 may be stored in the memory 709 (or another storage medium) or transmitted via the radio frequency unit 701 or the network module 702.
  • the microphone 7042 can receive sound and can process such sound into audio data.
  • in the telephone call mode, the processed audio data can be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701 and then output.
  • the electronic device 700 also includes at least one sensor 705, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 7061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 7061 and/or the backlight when the electronic device 700 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the electronic device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping); the sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not repeated here.
  • the display unit 706 is used to display information input by the user or information provided to the user.
  • the display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • the user input unit 707 may be used to receive input numerical or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user input unit 707 includes a touch panel 7071 and other input devices 7072.
  • the touch panel 7071, also referred to as a touch screen, can collect touch operations by the user on or near it (such as operations performed on or near the touch panel 7071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 7071 may include two parts, a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends it to the processor 710.
  • the touch panel 7071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types.
  • the user input unit 707 may also include other input devices 7072 .
  • other input devices 7072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the touch panel 7071 can be overlaid on the display panel 7061.
  • when the touch panel 7071 detects a touch operation on or near it, it transmits the operation to the processor 710 to determine the type of the touch event; the processor 710 then provides a corresponding visual output on the display panel 7061 according to the type of the touch event.
  • although the touch panel 7071 and the display panel 7061 are described here as two independent components realizing the input and output functions of the electronic device, in some embodiments the touch panel 7071 and the display panel 7061 may be integrated to realize the input and output functions of the electronic device; this is not specifically limited here.
  • the interface unit 708 is an interface for connecting an external device to the electronic device 700.
  • the external device interfaces may include wired or wireless headset ports, external power (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting devices with identification modules, audio input/output (I/O) ports, video I/O ports, earphone ports, and the like.
  • the interface unit 708 may be used to receive input from an external device (e.g., data information or power) and transmit the received input to one or more elements within the electronic device 700, or may be used to transfer data between the electronic device 700 and an external device.
  • the memory 709 may be used to store software programs as well as various data.
  • the memory 709 may mainly include a program storage area and a data storage area; the program storage area may store an operating system and an application program required for at least one function (such as a sound playback function or an image playback function), while the data storage area may store data created during use of the device (such as audio data or a phone book).
  • the memory 709 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 710 is the control center of the electronic device; it connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 709 and calling the data stored in the memory 709, thereby monitoring the electronic device as a whole.
  • the processor 710 may include one or more processing units; optionally, the processor 710 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 710.
  • the electronic device 700 may also include a power supply 711 (such as a battery) for supplying power to various components.
  • the power supply 711 may be logically connected to the processor 710 through a power management system, so as to manage charging, discharging, power consumption, and other functions through the power management system.
  • in addition, the electronic device 700 may include some functional modules that are not shown, which will not be repeated here.
  • Embodiments of the present application further provide an electronic device, including a processor 710, a memory 709, and a computer program stored in the memory 709 and executable on the processor 710. When the computer program is executed by the processor 710, each process of the above-mentioned embodiments of the high-energy image synthesis method based on the Wasserstein generative adversarial network model is implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
  • Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the above-mentioned embodiments of the high-energy image synthesis method based on the Wasserstein generative adversarial network model is implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
  • the computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media; information storage may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cartridges, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
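To make the three loss bullets above concrete, the following is a minimal PyTorch sketch of how a composite first-loss value could be computed. It is an illustration under stated assumptions, not the patented formulation: an L1 term stands in for the preset pixel difference calibration function, a single-scale SSIM term for the preset structural loss function, and a multi-scale L1 term for the preset multi-scale feature (texture) loss. The function names and the weights w_pix, w_struct, and w_feat are invented for the example.

```python
import torch.nn.functional as F

def pixel_loss(fake_hi, real_hi):
    # Pixel difference between the synthesized and the standard high-energy image.
    return F.l1_loss(fake_hi, real_hi)

def structural_loss(fake_hi, real_hi, c1=0.01 ** 2, c2=0.03 ** 2):
    # Structural difference taken as 1 - SSIM over 11x11 local windows (single scale).
    mu_x = F.avg_pool2d(fake_hi, 11, 1, 5)
    mu_y = F.avg_pool2d(real_hi, 11, 1, 5)
    var_x = F.avg_pool2d(fake_hi * fake_hi, 11, 1, 5) - mu_x ** 2
    var_y = F.avg_pool2d(real_hi * real_hi, 11, 1, 5) - mu_y ** 2
    cov_xy = F.avg_pool2d(fake_hi * real_hi, 11, 1, 5) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim.mean()

def multiscale_feature_loss(fake_hi, real_hi, scales=(1, 2, 4)):
    # Texture difference measured as an L1 gap at several spatial scales.
    total = 0.0
    for s in scales:
        f = F.avg_pool2d(fake_hi, s) if s > 1 else fake_hi
        r = F.avg_pool2d(real_hi, s) if s > 1 else real_hi
        total = total + F.l1_loss(f, r)
    return total / len(scales)

def first_loss(fake_hi, real_hi, w_pix=1.0, w_struct=0.5, w_feat=0.5):
    # Weighted sum of the three preset terms; the weights are assumptions.
    return (w_pix * pixel_loss(fake_hi, real_hi)
            + w_struct * structural_loss(fake_hi, real_hi)
            + w_feat * multiscale_feature_loss(fake_hi, real_hi))
```

For instance, `first_loss(generator(low_energy), standard_high_energy)` would yield the scalar used to update the generator; inputs are assumed to be `(N, 1, H, W)` tensors scaled to [0, 1].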
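The generator and discriminator bullets above can likewise be sketched in PyTorch. Only the 4-level encoder/decoder with skip connections, the 9-layer residual network, and the 8 stride-alternating 3×3 convolution + LReLU groups are specified in the text; the channel widths, the LeakyReLU slope, the placement of the residual blocks at the bottleneck, and the linear Wasserstein critic head are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # One unit of the residual network mentioned for the coding/decoding layers.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    # 4-level encoder/decoder with a skip connection between matching levels;
    # the 9 residual blocks are placed at the bottleneck (an assumption).
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]   # assumed channel widths
        self.encoders = nn.ModuleList()
        prev = in_ch
        for ch in chs:
            self.encoders.append(nn.Sequential(
                nn.Conv2d(prev, ch, 3, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True)))
            prev = ch
        self.bottleneck = nn.Sequential(*[ResidualBlock(prev) for _ in range(9)])
        self.decoders = nn.ModuleList()
        for ch in reversed(chs):
            # The concatenated skip connection widens the decoder input.
            self.decoders.append(nn.Sequential(
                nn.ConvTranspose2d(prev + ch, ch, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True)))
            prev = ch
        self.out = nn.Conv2d(prev, in_ch, 3, padding=1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
        x = self.bottleneck(x)
        for dec, skip in zip(self.decoders, reversed(skips)):
            x = dec(torch.cat([x, skip], dim=1))
        return self.out(x)

class Discriminator(nn.Module):
    # 8 groups of 3x3 convolution + LReLU; groups at odd positions (1st, 3rd, ...)
    # use stride 1 and groups at even positions use stride 2, as described above.
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        layers, prev = [], in_ch
        for i in range(8):
            ch = base * (2 ** (i // 2))   # 64, 64, 128, 128, ... (assumed widths)
            stride = 1 if i % 2 == 0 else 2
            layers += [nn.Conv2d(prev, ch, 3, stride=stride, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            prev = ch
        self.features = nn.Sequential(*layers)
        # A single unbounded score (no sigmoid) is conventional for a Wasserstein critic.
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(prev, 1))

    def forward(self, x):
        return self.head(self.features(x))
```

With `base=64`, a `(1, 1, 256, 256)` low-energy input passes through the generator and returns a tensor of the same shape as the synthesized high-energy image; the input height and width must be divisible by 16 for the skip shapes to match.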

Abstract

The invention relates to a high-energy image synthesis method based on a Wasserstein generative adversarial network model. The method comprises the steps of: obtaining a low-energy image to be synthesized (S11); and inputting the low-energy image to be synthesized into a pre-trained Wasserstein generative adversarial network model to obtain a synthesized target high-energy image (S12), the Wasserstein generative adversarial network model being obtained by training a preset generative adversarial network model on the basis of a low-energy image sample, a standard high-energy image, and a preset loss function, and the preset loss function being established at least according to a loss function used to reduce image noise and remove image artifacts.
PCT/CN2020/137188 2020-12-17 2020-12-17 Method and device for high-energy image synthesis based on a Wasserstein generative adversarial network model WO2022126480A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137188 WO2022126480A1 (fr) 2020-12-17 2020-12-17 Method and device for high-energy image synthesis based on a Wasserstein generative adversarial network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137188 WO2022126480A1 (fr) 2020-12-17 2020-12-17 Method and device for high-energy image synthesis based on a Wasserstein generative adversarial network model

Publications (1)

Publication Number Publication Date
WO2022126480A1 true WO2022126480A1 (fr) 2022-06-23

Family

ID=82058817

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/137188 WO2022126480A1 (fr) Method and device for high-energy image synthesis based on a Wasserstein generative adversarial network model

Country Status (1)

Country Link
WO (1) WO2022126480A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190128989A1 (en) * 2017-11-01 2019-05-02 Siemens Healthcare Gmbh Motion artifact reduction of magnetic resonance images with an adversarial trained network
CN110580695A (zh) * 2019-08-07 2019-12-17 深圳先进技术研究院 Multi-modal three-dimensional medical image fusion method, system, and electronic device
CN111275647A (zh) * 2020-01-21 2020-06-12 南京信息工程大学 Underwater image restoration method based on a cyclic generative adversarial network
CN111582348A (zh) * 2020-04-29 2020-08-25 武汉轻工大学 Training method, apparatus, device, and storage medium for a conditional generative adversarial network
CN111696066A (zh) * 2020-06-13 2020-09-22 中北大学 Multi-band image synchronous fusion and enhancement method based on improved WGAN-GP

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115345238A (zh) * 2022-08-17 2022-11-15 中国人民解放军61741部队 Method and device for generating seawater transparency fusion data
CN115345238B (zh) * 2022-08-17 2023-04-07 中国人民解放军61741部队 Method and device for generating seawater transparency fusion data
CN117009913A (zh) * 2023-05-05 2023-11-07 中国人民解放军61741部队 Fusion method for sea surface height anomaly data based on satellite altimeters and tide gauge stations
CN117009913B (zh) * 2023-05-05 2024-01-30 中国人民解放军61741部队 Fusion method for sea surface height anomaly data based on satellite altimeters and tide gauge stations
CN116822623A (zh) * 2023-08-29 2023-09-29 苏州浪潮智能科技有限公司 Generative adversarial network joint training method, apparatus, device, and storage medium
CN116822623B (zh) * 2023-08-29 2024-01-12 苏州浪潮智能科技有限公司 Generative adversarial network joint training method, apparatus, device, and storage medium
CN117611473A (zh) * 2024-01-24 2024-02-27 佛山科学技术学院 Image fusion method with synchronous denoising and related device
CN117611473B (zh) * 2024-01-24 2024-04-23 佛山科学技术学院 Image fusion method with synchronous denoising and related device

Similar Documents

Publication Publication Date Title
WO2022126480A1 (fr) Method and device for high-energy image synthesis based on a Wasserstein generative adversarial network model
JP7085062B2 (ja) Image segmentation method, apparatus, computer device, and computer program
CN112634390B (zh) High-energy image synthesis method and device based on a Wasserstein generative adversarial network model
CN110059744A (zh) Method for training a neural network, image processing method, device, and storage medium
CN107895369B (zh) Image classification method, apparatus, storage medium, and device
US11107212B2 (en) Methods and systems for displaying a region of interest of a medical image
CN111223143B (zh) Key point detection method and apparatus, and computer-readable storage medium
CN113610750B (zh) Object recognition method and apparatus, computer device, and storage medium
WO2019233216A1 (fr) Gesture recognition method, apparatus, and device
CN112651890A (zh) PET-MRI image denoising method and device based on a dual-encoding fusion network model
CN110110045B (zh) Method, apparatus, and storage medium for retrieving similar texts
WO2019141193A1 (fr) Method and apparatus for processing video frame data
WO2021114847A1 (fr) Internet calling method and apparatus, computer device, and storage medium
CN111031234B (zh) Image processing method and electronic device
CN110781899B (zh) Image processing method and electronic device
US20160150986A1 (en) Living body determination devices and methods
CN107566749B (zh) Photographing method and mobile terminal
CN110991457B (zh) Two-dimensional code processing method and apparatus, electronic device, and storage medium
CN111107357B (zh) Image processing method, apparatus, system, and storage medium
CN110766610B (zh) Super-resolution image reconstruction method and electronic device
CN108647566B (zh) Method and terminal for identifying skin features
CN111028161B (zh) Image correction method and electronic device
CN108846817B (zh) Image processing method and apparatus, and mobile terminal
CN113822955B (zh) Image data processing method and apparatus, computer device, and storage medium
CN111982293B (zh) Body temperature measurement method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20965496

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20965496

Country of ref document: EP

Kind code of ref document: A1