CN112529949A - Method and system for generating DWI image based on T2 image - Google Patents

Method and system for generating DWI image based on T2 image

Info

Publication number
CN112529949A
CN112529949A
Authority
CN
China
Prior art keywords
image
dwi
data set
network model
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011422209.9A
Other languages
Chinese (zh)
Inventor
刘涛
张心雨
吴振洲
付鹤
程健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ande Yizhi Technology Co ltd
Original Assignee
Beijing Ande Yizhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ande Yizhi Technology Co ltd filed Critical Beijing Ande Yizhi Technology Co ltd
Priority to CN202011422209.9A
Publication of CN112529949A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 Registration of image sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a system for generating a DWI image based on a T2 image. The method comprises: acquiring T2 images with and without lesions and their paired DWI images, and pairing the T2 and DWI images one-to-one to form a first data set; preprocessing the first data set to obtain a second data set; cutting the images in the second data set into patches to obtain a third data set; training an improved adversarial network model with the third data set to obtain the trained adversarial network model; acquiring a T2 image to be processed, recorded as a first image; preprocessing the first image to obtain a second image; cutting the second image into patches to obtain a third image; inputting the third image into the trained adversarial network model to obtain patches of the DWI image; and stitching the patches of the DWI image together to obtain the DWI image. The invention can generate corresponding DWI image information from existing T2 image information and improves the quality of the generated DWI image.

Description

Method and system for generating DWI image based on T2 image
Technical Field
The invention relates to the field of image processing, in particular to a method and a system for generating a DWI image based on a T2 image.
Background
Magnetic Resonance Imaging (MRI) stands out among medical imaging techniques for its safety and the richness of the information it provides, and it is widely used in clinical diagnosis and treatment. MRI has different modalities, each of which captures certain features of the underlying anatomy and provides a unique view of the intrinsic magnetic resonance parameters.
For example, diffusion-weighted imaging (DWI) is an MRI technique developed in the early-to-mid 1990s and introduced and clinically popularized in China in the mid-1990s. DWI is the only noninvasive method capable of detecting the diffusion of water molecules in living tissue; by suppressing the tissue background signal it excels at displaying lesions, and it provides important information for diagnosing ultra-early cerebral ischemia, cerebral infarction, cerebral abscess, brain tumors, and the like. However, for logistical reasons, most existing datasets contain only T2 images and no DWI images, and in many cases lesions invisible on T2 images are visible only on DWI, which captures the diffusive motion of water molecules; image generation techniques are therefore needed to produce DWI images from T2 images. Generating a corresponding DWI image from a T2 image is thus a problem well worth investigating: the goal is to obtain the corresponding DWI image from existing T2 image information without an actual scan.
The existing DWI acquisition methods include the following:
Magnetic resonance imaging: a DWI is obtained directly by a magnetic resonance scan.
Cross-modal DWI generation by conventional generative models (VAE, PixelCNN, Glow, etc.): these methods model the nonlinear relationship between modalities to generate the image.
Cross-modal DWI generation by a conventional generative adversarial network (GAN): the model is trained on T2 images and paired DWI images, and the trained model is then used to generate DWI.
These existing methods have the following drawbacks:
1) Magnetic resonance imaging: magnetic resonance scanning is expensive, is easily disturbed by physiological motion such as cerebrospinal fluid pulsation, heartbeat and respiration, and carries a higher risk of artifacts.
2) Cross-modal DWI generation by conventional generative models (VAE, PixelCNN, Glow, etc.): because image characteristics differ greatly across modalities, the relationship between any two MRI modalities is highly nonlinear. Conventional generative models cannot simulate this nonlinear relationship well, so the generated images are somewhat blurred and the generation quality suffers.
3) Cross-modal DWI generation by a conventional generative adversarial network (GAN): the idea of a GAN is to use a game to continuously optimize the generator and the discriminator so that the distributions of generated and real images become more and more similar. Magnetic resonance images are three-dimensional and the training sample size is much smaller than for natural images, which places higher demands on the structure of the generative adversarial network: the convolutional neural network must fully extract and exploit the information contained in the magnetic resonance image. The generative adversarial networks currently applied to cross-modal magnetic resonance generation are rather conventional, and their efficiency in extracting and utilizing features is limited.
Disclosure of Invention
The invention aims to provide a method and a system for generating a DWI image based on a T2 image, which can improve the quality of the generated DWI image.
In order to achieve the purpose, the invention provides the following scheme:
a method of generating a DWI image based on a T2 image, comprising:
acquiring a T2 image with and without a focus and a DWI image matched with the T2 image, and forming a data pair by one-to-one correspondence of the T2 image and the DWI image to form a first data set;
preprocessing the first data set to obtain a second data set;
cutting the image in the second data set into a patch to obtain a third data set;
training an improved confrontation network model by using the third data set to obtain the trained confrontation network model, wherein the improved confrontation network model comprises a generator, a first discriminator and a second discriminator;
acquiring a T2 image to be processed, and recording the T2 image as a first image;
preprocessing the first image to obtain a second image;
cutting the second image into a patch to obtain a third image;
inputting the third image into the trained confrontation network model to obtain a patch of the DWI image;
and splicing and combining the patches of the DWI image to obtain the DWI image.
Optionally, the preprocessing includes image registration, skull stripping, image data normalization, and image resizing.
Optionally, cutting the images in the second data set into patches comprises cutting each image into a plurality of patches, where adjacent patches overlap at their boundaries.
Optionally, the patch size is 64 × 64 × 5.
Optionally, training the improved adversarial network model with the third data set comprises: training the improved adversarial network model by back propagation and a gradient descent algorithm.
Optionally, the generator adopts a UNet structure.
Optionally, the first discriminator is obtained by copying the generator after its parameters are updated, and the weights of the first discriminator remain unchanged while the weights are updated through back propagation.
Optionally, the second discriminator adopts a CNN structure.
A system for generating a DWI image based on a T2 image, comprising:
the first data set construction module is used for acquiring T2 images with and without lesions and their paired DWI images, and pairing the T2 and DWI images one-to-one to form a first data set;
the first preprocessing module is used for preprocessing the first data set to obtain a second data set;
the first image cutting module is used for cutting the images in the second data set into patches to obtain a third data set;
the first training module is used for training an improved adversarial network model with the third data set to obtain the trained adversarial network model, the improved adversarial network model comprising a generator, a first discriminator and a second discriminator;
the image acquisition module is used for acquiring a T2 image to be processed, recorded as a first image;
the second preprocessing module is used for preprocessing the first image to obtain a second image;
the second image cutting module is used for cutting the second image into patches to obtain a third image;
the adversarial network module is used for inputting the third image into the trained adversarial network model to obtain patches of the DWI image;
and the image stitching module is used for stitching the patches of the DWI image together to obtain the DWI image.
Optionally, the first preprocessing module and the second preprocessing module include an image registration unit, a skull stripping unit, an image data standardization unit, and an image resizing unit.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
according to the invention, the corresponding relation between the T2 image and the DWI image is constructed in advance, the improved confrontation network model is trained by using the corresponding relation, after the training is finished, the corresponding DWI image can be generated by using the existing T2 image under the condition of not actually scanning DWI, and the quality of the generated DWI image is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method of generating a DWI image based on a T2 image according to the present invention;
FIG. 2 is a flow diagram of a framework for generating a DWI image based on a T2 image according to the present invention;
FIG. 3 is a framework diagram of the improved adversarial network model T22DWI-GAN of the present invention;
FIG. 4 is a diagram of the generator network structure in the improved adversarial network model of the present invention;
FIG. 5 is a block diagram of the generator's 3x3 convolution module in the improved adversarial network model of the present invention;
FIG. 6 is a block diagram of the generator's downsampling module in the improved adversarial network model of the present invention;
FIG. 7 is a block diagram of the generator's upsampling module in the improved adversarial network model of the present invention;
FIG. 8 is a diagram of the second discriminator in the improved adversarial network model of the present invention;
FIG. 9 is a block diagram of a system for generating a DWI image based on a T2 image according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
The invention aims to provide a method and a system for generating a DWI image based on a T2 image, which can improve the quality of the generated DWI image.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of a method for generating a DWI image based on a T2 image according to the present invention, and as shown in fig. 1, a method for generating a DWI image based on a T2 image includes:
step 101: and acquiring a T2 image with and without a focus and a DWI image matched with the T2 image, and forming a data pair by one-to-one correspondence of the T2 image and the DWI image to form a first data set.
Step 102: and preprocessing the first data set to obtain a second data set.
Step 103: and cutting the image in the second data set into patch to obtain a third data set.
Step 104: and training an improved confrontation network model by using the third data set to obtain the trained confrontation network model, wherein the improved confrontation network model comprises a generator, a first discriminator and a second discriminator.
Step 105: a T2 image to be processed is acquired and recorded as a first image.
Step 106: and preprocessing the first image to obtain a second image.
Step 107: and cutting the second image into a patch to obtain a third image.
Step 108: and inputting the third image into the trained confrontation network model to obtain the patch of the DWI image.
Step 109: and splicing and combining the patches of the DWI image to obtain the DWI image.
For ease of understanding, FIG. 2 shows the framework corresponding to these steps.
Specifically, in step 101, the invention uses multi-center T2 magnetic resonance images both without and with lesions (or without and with tumors), together with their paired DWI magnetic resonance images, as the raw data forming the first data set.
In step 102, the first data set is preprocessed, including image registration, skull stripping, image data normalization and image resizing.
The image registration comprises performing a nonlinear registration of the original T2 image to the DWI image;
the skull stripping comprises obtaining a skull-stripped image from the registered magnetic resonance image via a preset threshold;
the image data normalization comprises computing the mean and standard deviation of the voxels inside the skull-stripped brain contour and applying Gaussian normalization to those voxels;
the image resizing comprises scaling T2 and DWI images of different sizes to 320 × 320 by nearest-neighbor interpolation (only the in-plane dimensions are resized; the slice dimension is left unchanged) and center-cropping 80% of the image to obtain 256 × 256.
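A minimal Python sketch of the normalization and resizing steps described above, assuming registration and skull stripping have already been performed upstream and have produced a binary brain mask (the patent does not name specific tools for those steps):

```python
import numpy as np
from scipy.ndimage import zoom

def normalize_and_resize(volume, brain_mask):
    """Gaussian-normalize in-brain voxels, resize slices to 320x320 by
    nearest-neighbour interpolation, then centre-crop 80% to 256x256."""
    # Gaussian (z-score) normalization over voxels inside the brain contour.
    voxels = volume[brain_mask > 0]
    volume = (volume - voxels.mean()) / (voxels.std() + 1e-8)

    # Resize the in-plane dimensions only; the slice axis is untouched.
    h, w, _ = volume.shape
    volume = zoom(volume, (320.0 / h, 320.0 / w, 1.0), order=0)  # order=0: nearest neighbour

    # Centre-crop 80% of 320 = 256 in each in-plane dimension.
    off = (320 - 256) // 2
    return volume[off:off + 256, off:off + 256, :]
```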
Step 103 cuts the preprocessed T2 magnetic resonance images and their paired DWI magnetic resonance images into 64 × 64 × 5 patches with overlapping pixels; the paired patches are stored in the order of the images before cutting to construct the training set, i.e., the third data set.
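The patch cutting might look like the following sketch; the stride values are assumptions, since the patent states only that adjacent patches overlap:

```python
import numpy as np

def extract_patches(volume, size=(64, 64, 5), stride=(32, 32, 3)):
    """Cut a preprocessed volume into overlapping 64x64x5 patches, kept
    in scan order so that T2 and DWI patch pairs stay aligned."""
    H, W, D = volume.shape
    ph, pw, pd = size
    sh, sw, sd = stride
    patches = []
    for z in range(0, max(D - pd, 0) + 1, sd):
        for y in range(0, max(H - ph, 0) + 1, sh):
            for x in range(0, max(W - pw, 0) + 1, sw):
                patches.append(volume[y:y + ph, x:x + pw, z:z + pd])
    return np.stack(patches)
```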
In step 104, the T2 magnetic resonance image patches and their paired DWI magnetic resonance image patches are fed into the improved adversarial network model (the T22DWI-GAN network), which is trained by back propagation and a gradient descent algorithm. The improved adversarial network model T22DWI-GAN is validated by cross-validation, the hyperparameters are tuned during training, and the model parameters with high prediction accuracy and strong generalization are selected and saved to obtain the trained model.
Steps 105-109 form the generation stage: a T2 image to be processed is preprocessed and cut into patches in the same way as the training data, the patches are fed into the trained model to generate DWI patches, and the generated patches are stitched together to obtain the full DWI image.
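The final stitching step might be implemented as below, mirroring extract_patches; averaging over overlapping voxels is an assumption, as the patent says only that the patches are spliced and combined:

```python
import numpy as np

def stitch_patches(patches, out_shape, size=(64, 64, 5), stride=(32, 32, 3)):
    """Reassemble generated DWI patches into a full volume, averaging
    voxels where adjacent patches overlap."""
    out = np.zeros(out_shape, dtype=np.float32)
    count = np.zeros(out_shape, dtype=np.float32)
    H, W, D = out_shape
    ph, pw, pd = size
    sh, sw, sd = stride
    i = 0
    for z in range(0, max(D - pd, 0) + 1, sd):
        for y in range(0, max(H - ph, 0) + 1, sh):
            for x in range(0, max(W - pw, 0) + 1, sw):
                out[y:y + ph, x:x + pw, z:z + pd] += patches[i]
                count[y:y + ph, x:x + pw, z:z + pd] += 1
                i += 1
    return out / np.maximum(count, 1.0)
```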
The improved adversarial network model, the T22DWI-GAN network, is described below:
The T22DWI-GAN model comprises two parts, a generator and a discriminator, as shown in FIG. 3;
the generator model adopts the structure of Unet, as shown in FIG. 4. The Unet performs 4 downsampling and 4 upsampling. And the down-sampling layer restores the high-level semantic feature graph obtained by up-sampling to the resolution of the original picture. In addition, the Unet also uses jump connection to ensure that the finally recovered feature map fuses more low-level features, so that the information such as the recovery edge of the segmentation map is more refined. Compared with a general convolution network, the UNet structure of the invention does not adopt a pooling layer because the pooling layer can cause certain blurring.
The encoder part of the UNet-based generator consists of the following modules in sequence: first a 3x3 convolution module, followed by four downsampling modules. The 3x3 convolution module of the encoder, shown in FIG. 5, comprises two repeated convolution blocks, each consisting of a convolution layer with stride 1, padding 1 and kernel size 3x3, a batch normalization layer and a ReLU activation layer. The downsampling module, shown in FIG. 6, comprises a convolution unit (a convolution layer with stride 2, padding 1 and kernel size 4x4, plus a dropout layer) followed by two repeated convolution blocks, each again consisting of a stride-1, padding-1, 3x3 convolution layer, a batch normalization layer and a ReLU activation layer.
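In PyTorch, these encoder blocks might be sketched as follows; treating the 5 slices of each 64 × 64 × 5 patch as input channels, and the dropout rate, are assumptions not fixed by the patent:

```python
import torch.nn as nn

def conv3x3_block(in_ch, out_ch):
    """Two repeated 3x3 convolution blocks (stride 1, padding 1), each
    followed by batch normalization and ReLU, as in FIG. 5."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def downsample_block(in_ch, out_ch, p_drop=0.1):
    """A 4x4 strided convolution (stride 2, padding 1) with dropout,
    followed by the same two 3x3 convolution blocks, as in FIG. 6."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.Dropout2d(p_drop),
        conv3x3_block(out_ch, out_ch),
    )
```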
The decoder part of the UNet-based generator consists of the following modules in sequence: four upsampling modules, followed by a 1x1 convolution layer with stride 1, padding 0 and kernel size 1x1. The input of each upsampling module, shown in FIG. 7, has two parts: the features from the previous decoder stage and the features from the encoder downsampling module of the same stage. The upsampling module in turn comprises two parts: a deconvolution block and a convolution unit matching the encoder. The incoming features first pass through a deconvolution layer (stride 2, padding 0, kernel size 2), a batch normalization layer and a ReLU activation layer; the resulting feature map is then concatenated with the feature map of the same-stage encoder downsampling block, with the encoder feature map cropped first if the sizes do not match. The combined feature maps are then fed together into the remaining convolution unit of the upsampling module.
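A matching sketch of the decoder's upsampling module, reusing conv3x3_block from the encoder sketch above; it assumes the encoder feature map is at least as large as the upsampled one, so the crop is a centre crop:

```python
import torch
import torch.nn as nn

class UpsampleBlock(nn.Module):
    """Deconvolution (kernel 2, stride 2, padding 0) + batch norm + ReLU,
    concatenated with the same-stage encoder feature map (cropped if the
    sizes differ), then two 3x3 convolution blocks, as in FIG. 7."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2, padding=0),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.conv = conv3x3_block(out_ch * 2, out_ch)

    def forward(self, x, skip):
        x = self.up(x)
        # Centre-crop the encoder feature map if its size differs.
        dh = skip.size(2) - x.size(2)
        dw = skip.size(3) - x.size(3)
        skip = skip[:, :, dh // 2:dh // 2 + x.size(2), dw // 2:dw // 2 + x.size(3)]
        return self.conv(torch.cat([skip, x], dim=1))
```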
The discriminator model consists of two main parts, as shown in FIG. 3. The first part is the first discriminator (discriminator 1), which is copied from the generator after each parameter update during training; when the discriminator as a whole back-propagates to update its weights, the weights of this part remain unchanged, and its structure is identical to that of the generator. The second part is the second discriminator (discriminator 2), a typical CNN structure shown in FIG. 8, whose weights are updated when the discriminator back-propagates. The second discriminator comprises, in sequence: six repeated convolution blocks, one resize operation block, and three fully connected layers, where each convolution block comprises one convolution layer and one leaky-ReLU activation layer, and the convolution kernel sizes of the six convolution blocks are 9×9, 4×4, 5×5, 4×4, 5×5 and 4×4, respectively.
The input of the discriminator model is a paired T2 image patch together with either a generated DWI image patch or a real DWI image patch. The T2 image patch first passes through the first discriminator (discriminator 1), and the result is subtracted from the generated/real DWI image patch. The difference then splits into two branches: one branch keeps the difference as-is, while the other takes its absolute value and passes it through three fully connected layers whose parameters are N × 64, 64 × 5 and 5 × 1, where N = 64 × 64 × 5 (note that the weights of these fully connected layers are updated during back propagation). The results of the two branches are multiplied, and the product is fed into the second discriminator (discriminator 2). The second discriminator outputs 1 for a true pair (T2 image patch and real DWI image patch) and 0 for a false pair (T2 image patch and generated DWI image patch).
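The difference-and-gate pipeline in front of discriminator 2 might be sketched as follows; d1_out is the output of the frozen generator copy (discriminator 1) applied to the T2 patch, and the activations between the fully connected layers are assumptions, since the patent specifies only the layer sizes:

```python
import torch.nn as nn

class DifferenceBranch(nn.Module):
    """Subtract discriminator 1's output from the (real or generated) DWI
    patch, keep one branch as the raw difference, send |difference|
    through three fully connected layers (N*64, 64*5, 5*1, N = 64*64*5,
    trained by back propagation), and multiply the two branches."""
    def __init__(self, n=64 * 64 * 5):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 5), nn.ReLU(inplace=True),
            nn.Linear(5, 1),
        )

    def forward(self, d1_out, dwi_patch):
        diff = dwi_patch - d1_out              # branch 1: raw difference
        gate = self.fc(diff.abs())             # branch 2: |diff| -> three FC layers
        return diff * gate.view(-1, 1, 1, 1)   # product -> fed into discriminator 2
```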
The loss function of the improved adversarial network model T22DWI-GAN is detailed below:
The total loss uses three loss functions: a GAN loss function, an L2 loss function, and a gradient difference loss function.
1) GAN loss function
Unlike many other models, a GAN runs two optimization processes simultaneously during training, so separate optimizers must be defined for the discriminator and the generator: one minimizes the discriminator loss and the other minimizes the generator loss, i.e.

Loss_D_GAN + Loss_G_GAN  (1)
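A minimal alternating-update sketch of this two-optimizer scheme, assuming a generator G, a discriminator D, helper functions loss_d and loss_g implementing equations (2) and (7) below, and a loader yielding (T2, DWI) patch pairs; the optimizer choice and learning rate are assumptions:

```python
import torch

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

for t2, dwi in loader:
    # Discriminator step: minimize Loss_D on one real and one fake pair.
    fake = G(t2).detach()                 # stop gradients into the generator
    opt_d.zero_grad()
    loss_d(D, t2, dwi, fake).backward()
    opt_d.step()

    # Generator step: minimize Loss_G (GAN + L2 + gradient-difference terms).
    opt_g.zero_grad()
    loss_g(D, G, t2, dwi).backward()
    opt_g.step()
```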
a) Discriminator GAN loss function
During training, half of the discriminator's input comes from real training data and the other half from fake images produced by the generator. For real data, the discriminator tries to assign a probability close to 1; for images produced by the generator, it assigns a probability close to 0.
The discriminator GAN loss function is as follows:

Loss_D_GAN(X, Y) = L_BCE(D(Y), 1) + L_BCE(D(G(X)), 0)  (2)
where X is the source input image, Y is the corresponding target image, G(X) is the image produced by the generator, and D(·) is the probability the discriminator assigns to its input being "real". L_BCE is the binary cross entropy, defined as:

L_BCE(ŷ, y) = −[y·log(ŷ) + (1 − y)·log(1 − ŷ)]  (3)

where y is the label of the input data, taking the value 0 or 1 (0 denotes a generated image, 1 a real image), and ŷ is the probability, estimated by the discriminator, that the input obeys the real image distribution, with range [0, 1]. The binary cross entropy describes the distance between two probability distributions: the smaller it is, the closer the two distributions are.
b) Generator GAN loss function
The generator does the opposite of the discriminator: it outputs samples that drive the discriminator's verdict toward probability 1.
The generator GAN loss function is as follows:
Loss_G_GAN(X) = L_BCE(D(G(X)), 1)  (4)
as the training proceeds, the discriminators continue to enhance their own discriminative power, while the generators continue to generate more and more realistic outputs, so that the discriminators are confused. In theory, the final generator and arbiter will achieve an equilibrium "nash equilibrium".
2) L2 loss function
The L2 loss function is defined as follows:

Loss_L2(X, Y) = ‖Y − G(X)‖₂²  (5)

where Y denotes the true target image and G(X) is the target image generated from the source image X by the generator G.
3) Gradient difference loss function
Since the L2 loss function can produce blurred images, this scheme adds a gradient difference loss function as an additional term, defined as:

Loss_GDL(Y, Ŷ) = Σ_{i,j} [ (|Y_{i,j} − Y_{i−1,j}| − |Ŷ_{i,j} − Ŷ_{i−1,j}|)² + (|Y_{i,j} − Y_{i,j−1}| − |Ŷ_{i,j} − Ŷ_{i,j−1}|)² ]  (6)

where Y is the true target image and Ŷ = G(X) is the target image produced by the generator. The gradient difference loss minimizes the difference in gradient magnitude between the ground-truth target image and the generated target image while keeping the generated image's gradients large in regions with strong gradients (i.e., edges), effectively compensating for the blurring caused by the L2 loss function.
Combining the GAN loss function, the L2 loss function and the gradient difference loss function, the final generator and discriminator loss functions are, respectively:

Loss_G(X, Y) = λ₁·Loss_G_GAN(X) + λ₂·Loss_L2(X, Y) + λ₃·Loss_GDL(Y, G(X))  (7)

Loss_D(X, Y) = Loss_D_GAN(X, Y)  (8)

where λ₁, λ₂ and λ₃ are coefficients, and Loss_G_GAN, Loss_L2, Loss_GDL and Loss_D_GAN(X, Y) are defined in formulas (2) to (6).
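These loss terms might be written in PyTorch as below; the λ values are placeholders, since the patent leaves the coefficients unspecified, and the discriminator is assumed to end in a sigmoid so that its output is a probability:

```python
import torch
import torch.nn.functional as F

def gradient_difference_loss(y, y_hat):
    """Equation (6): penalize differences in gradient magnitude along the
    two in-plane axes between real and generated patches (B, C, H, W)."""
    dy, dy_hat = (y[:, :, 1:, :] - y[:, :, :-1, :]).abs(), (y_hat[:, :, 1:, :] - y_hat[:, :, :-1, :]).abs()
    dx, dx_hat = (y[:, :, :, 1:] - y[:, :, :, :-1]).abs(), (y_hat[:, :, :, 1:] - y_hat[:, :, :, :-1]).abs()
    return ((dy - dy_hat) ** 2).mean() + ((dx - dx_hat) ** 2).mean()

def generator_loss(d_out_fake, y, y_hat, lam1=1.0, lam2=100.0, lam3=10.0):
    """Equation (7): weighted sum of the GAN, L2 and gradient-difference terms."""
    gan = F.binary_cross_entropy(d_out_fake, torch.ones_like(d_out_fake))
    return lam1 * gan + lam2 * F.mse_loss(y_hat, y) + lam3 * gradient_difference_loss(y, y_hat)

def discriminator_loss(d_out_real, d_out_fake):
    """Equations (2) and (8): BCE pushing real pairs toward 1, fake pairs toward 0."""
    return (F.binary_cross_entropy(d_out_real, torch.ones_like(d_out_real)) +
            F.binary_cross_entropy(d_out_fake, torch.zeros_like(d_out_fake)))
```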
A system for generating a DWI image based on a T2 image, as shown in fig. 9, comprising:
a first data set construction module 901, configured to acquire T2 images with and without lesions and their paired DWI images, and pair the T2 and DWI images one-to-one to form a first data set;
a first preprocessing module 902, configured to preprocess the first data set to obtain a second data set;
a first image cutting module 903, configured to cut the images in the second data set into patches to obtain a third data set;
a first training module 904, configured to train an improved adversarial network model with the third data set to obtain the trained adversarial network model, where the improved adversarial network model comprises a generator, a first discriminator and a second discriminator;
an image acquisition module 905, configured to acquire a T2 image to be processed, recorded as a first image;
a second preprocessing module 906, configured to preprocess the first image to obtain a second image;
a second image cutting module 907, configured to cut the second image into patches to obtain a third image;
an adversarial network module 908, configured to input the third image into the trained adversarial network model to obtain patches of the DWI image;
and an image stitching module 909, configured to stitch the patches of the DWI image together to obtain the DWI image.
The invention also discloses the following technical effects:
the invention adopts a convolutional neural network structure based on the GAN idea, namely T22 DWI-GAN. Compared with a general generation countermeasure network, the discriminator mainly comprises two parts, wherein the first part is the discriminator 1 which is copied by the generator, and the model weight of the part is fixed and is not updated when the part is reversely transmitted; the second part is the discriminator 2, which is a simple typical CNN structure that updates the model weights normally when propagating in the reverse direction. The design can effectively improve the extraction and utilization efficiency of the features and keep the actual training cost unchanged. The analysis speed of the scheme is high, the generalization capability is strong, the peak signal-to-noise ratio is improved by 6% -10% compared with the traditional GAN, the effect is better and more stable, and the method has practicability and usability.
By cutting the complete image into overlapping patches for training in the data preprocessing stage, the invention reduces the memory requirement, achieves data augmentation, and improves the training effect.
The invention uses multi-center training samples during model training; since the differences among multi-center samples are large, the trained model generalizes well and applies to a wider range of data. Existing T2 image information can thus be used to generate the corresponding DWI image information without actually scanning DWI, improving the quality of the DWI images.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A method of generating a DWI image based on a T2 image, comprising:
acquiring T2 images with and without lesions and their paired DWI images, and pairing the T2 and DWI images one-to-one to form a first data set;
preprocessing the first data set to obtain a second data set;
cutting the images in the second data set into patches to obtain a third data set;
training an improved adversarial network model with the third data set to obtain the trained adversarial network model, wherein the improved adversarial network model comprises a generator, a first discriminator and a second discriminator;
acquiring a T2 image to be processed, recorded as a first image;
preprocessing the first image to obtain a second image;
cutting the second image into patches to obtain a third image;
inputting the third image into the trained adversarial network model to obtain patches of the DWI image;
and stitching the patches of the DWI image together to obtain the DWI image.
2. The method for generating a DWI image based on a T2 image according to claim 1, wherein the pre-processing includes image registration, skull stripping, image data normalization and image resizing.
3. The method of generating a DWI image based on a T2 image according to claim 1, wherein cutting the images in the second data set into patches comprises cutting each image into a plurality of patches, where adjacent patches overlap at their boundaries.
4. The method of generating a DWI image based on a T2 image according to claim 1 or 3, wherein the patch size is 64 × 64 × 5.
5. The method of generating a DWI image based on a T2 image according to claim 1, wherein training the improved adversarial network model with the third data set comprises: training the improved adversarial network model by back propagation and a gradient descent algorithm.
6. A method of generating a DWI image on the basis of a T2 image as claimed in claim 1, characterized in that the generator employs a UNet structure.
7. The method of generating a DWI image based on a T2 image according to claim 1, wherein the first discriminator is obtained by copying the generator after its parameters are updated, and the weights of the first discriminator remain unchanged during back propagation of the updated weights.
8. A method of generating a DWI image on the basis of a T2 image as claimed in claim 1, characterized in that the second discriminator employs a CNN structure.
9. A system for generating a DWI image based on a T2 image, comprising:
a first data set construction module, configured to acquire T2 images with and without lesions and their paired DWI images, and pair the T2 and DWI images one-to-one to form a first data set;
a first preprocessing module, configured to preprocess the first data set to obtain a second data set;
a first image cutting module, configured to cut the images in the second data set into patches to obtain a third data set;
a first training module, configured to train an improved adversarial network model with the third data set to obtain the trained adversarial network model, the improved adversarial network model comprising a generator, a first discriminator and a second discriminator;
an image acquisition module, configured to acquire a T2 image to be processed, recorded as a first image;
a second preprocessing module, configured to preprocess the first image to obtain a second image;
a second image cutting module, configured to cut the second image into patches to obtain a third image;
an adversarial network module, configured to input the third image into the trained adversarial network model to obtain patches of the DWI image;
and an image stitching module, configured to stitch the patches of the DWI image together to obtain the DWI image.
10. A system for generating DWI images based on T2 images according to claim 9, characterized in that the first and second pre-processing modules comprise an image registration unit, a skull stripping unit, an image data normalization unit and an image resizing unit.
CN202011422209.9A 2020-12-08 2020-12-08 Method and system for generating DWI image based on T2 image Pending CN112529949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011422209.9A CN112529949A (en) 2020-12-08 2020-12-08 Method and system for generating DWI image based on T2 image

Publications (1)

Publication Number Publication Date
CN112529949A true CN112529949A (en) 2021-03-19

Family

ID=74998117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011422209.9A Pending CN112529949A (en) 2020-12-08 2020-12-08 Method and system for generating DWI image based on T2 image

Country Status (1)

Country Link
CN (1) CN112529949A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108577940A (en) * 2018-02-11 2018-09-28 苏州融准医疗科技有限公司 A kind of targeting guiding puncture system and method based on multi-modality medical image information
US20200101319A1 (en) * 2018-09-27 2020-04-02 Varian Medical Systems International Ag Systems, methods and devices for automated target volume generation
CN109741379A (en) * 2018-12-19 2019-05-10 上海商汤智能科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109829894A (en) * 2019-01-09 2019-05-31 平安科技(深圳)有限公司 Parted pattern training method, OCT image dividing method, device, equipment and medium
CN110544275A (en) * 2019-08-19 2019-12-06 中山大学 Methods, systems, and media for generating registered multi-modality MRI with lesion segmentation tags
CN111563841A (en) * 2019-11-13 2020-08-21 南京信息工程大学 High-resolution image generation method based on generation countermeasure network
CN111862261A (en) * 2020-08-03 2020-10-30 北京航空航天大学 FLAIR modal magnetic resonance image generation method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758035A (en) * 2022-06-13 2022-07-15 之江实验室 Image generation method and device for unpaired data set
CN114758035B (en) * 2022-06-13 2022-09-27 之江实验室 Image generation method and device for unpaired data set

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination