CN112785540A - Generation system and method of diffusion weighted image

Generation system and method of diffusion weighted image

Info

Publication number
CN112785540A
Authority
CN
China
Prior art keywords
image
feature
diffusion
diffusion weighted
subunit
Prior art date
Legal status
Granted
Application number
CN202110136563.3A
Other languages
Chinese (zh)
Other versions
CN112785540B (en)
Inventor
胡磊
周大为
赵俊功
Current Assignee
Shanghai Sixth People's Hospital
Original Assignee
Shanghai Sixth People's Hospital
Priority date
Filing date
Publication date
Application filed by Shanghai Sixth People's Hospital
Publication of CN112785540A
Application granted
Publication of CN112785540B
Status: Active

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30081: Prostate
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The invention provides a system and a method for generating a diffusion weighted image, relating to the technical field of image synthesis. The system comprises: an image acquisition module for acquiring a plurality of first diffusion weighted images with a first diffusion sensitivity coefficient and a plurality of second diffusion weighted images with a second diffusion sensitivity coefficient, matching the first and second diffusion weighted images one by one to form image pairs, and adding the image pairs to a data set; a model training module for taking the first diffusion weighted images in the data set as input and the corresponding second diffusion weighted images as output, and training with a pre-established supervision network to obtain an image synthesis model; and an image generation module for inputting a diffusion weighted image with the first diffusion sensitivity coefficient into the image synthesis model to obtain a diffusion weighted image with the second diffusion sensitivity coefficient. The beneficial effects are that the scanning time and the software and hardware requirements of the scanning device are greatly reduced, and economic and time costs are effectively saved.

Description

Generation system and method of diffusion weighted image
Technical Field
The invention relates to the technical field of image synthesis, in particular to a system and a method for generating a diffusion weighted image.
Background
Diffusion weighted imaging (DWI) is an important component of magnetic resonance imaging (MRI) examination of the prostate and can improve the detection rate and the level of qualitative diagnosis of prostate cancer. The b value, i.e. the diffusion sensitivity coefficient, is the diffusion-sensitive gradient field parameter applied during diffusion weighted imaging; the higher the b value, the more sensitive the imaging is to the diffusion of water molecules, and high-b-value DWI (i.e. b > 1000 s/mm²) is considered an important indicator for the detection of prostate cancer. Traditional high-b-value DWI is acquired directly with an echo-planar imaging (EPI) high-b-value DWI sequence and suffers from low spatial resolution, poor signal-to-noise ratio, many artifacts and long scanning time. Another way to obtain a high-b-value DWI image is computed DWI (C-DWI), which mainly fits DWI images acquired at lower b values; this improves image quality to a certain extent and shortens the scanning time. However, the method usually requires at least two low-b-value images, and its image quality also depends on the quality of the low-b-value images used for synthesis. In recent years, the application of the small field-of-view (zoomed-FOV) technique has effectively improved the quality of the DWI images obtained by both methods; in particular, high-b-value ZOOMit-DWI based on the small field-of-view technique has attracted wide attention in a short time because of its higher image quality.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a diffusion weighted image generation system, which specifically comprises:
the image acquisition module is used for acquiring a plurality of first diffusion weighted images with first diffusion sensitivity coefficients and a plurality of second diffusion weighted images with second diffusion sensitivity coefficients, wherein the first diffusion weighted images correspond to the second diffusion weighted images one by one;
the image acquisition module matches the first diffusion weighted image with the corresponding second diffusion weighted image one by one to form an image pair, and adds each image pair into a data set;
the second diffusion sensitivity coefficient is higher than the first diffusion sensitivity coefficient;
the model training module is connected with the image acquisition module and used for taking the first diffusion weighted image in the data set as input, taking the corresponding second diffusion weighted image as output and obtaining an image synthesis model by adopting a pre-established supervision network for training;
and the image generation module is connected with the model training module and used for inputting the diffusion weighted image with the first diffusion sensitivity coefficient into the image synthesis model and generating a synthesized diffusion weighted image with the second diffusion sensitivity coefficient.
Preferably, the generation system further comprises an image post-processing module, connected to the image generation module and configured to perform edge enhancement and/or denoising on the generated diffusion weighted image with the second diffusion sensitivity coefficient.
Preferably, the supervision network has a first loss function, a second loss function and a third loss function which are constructed in advance;
the model training module comprises:
the selecting submodule is used for selecting a plurality of image pairs from the data set to form a reference set, and extracting data from the data set to obtain a training set;
the processing submodule is used for constructing an image synthesis network according to preset hyper-parameters, performing iterative training on the image synthesis network according to the training set, and respectively inputting the first diffusion weighted images in each image pair into the image synthesis network after each training is finished to obtain a synthesized image;
a supervision submodule, respectively connected to the selection submodule and the processing submodule, configured to input the synthesized image, each of the first diffusion weighted images in the training set, and the reference set into the supervision network for learning, and output a first loss value calculated according to the first loss function, a second loss value calculated according to the second loss function, and a third loss value calculated according to the third loss function;
and the updating sub-module is respectively connected with the processing sub-module and the monitoring sub-module and is used for adjusting the hyper-parameter according to the first loss value, the second loss value and the third loss value after each training is finished, and obtaining the image synthesis model after all training is finished.
Preferably, the training set includes a plurality of image pairs extracted from the data set, the model training module trains the image synthesis model in a fully supervised learning manner, and the supervision network includes a first supervision network associated with the first loss function, a second supervision network associated with the second loss function, and a third supervision network associated with the third loss function;
the supervision submodule then comprises:
a first supervision unit, configured to, for each of the image pairs, input the composite image and the corresponding second diffusion weighted image into the first supervision network for adversarial learning to obtain the first loss value;
a second supervision unit, configured to, for each of the image pairs, input the synthesized image, the corresponding first diffusion weighted image, and the reference set into the second supervision network for feature learning to obtain the second loss value;
and the third supervision unit is used for inputting the synthetic image and the corresponding second diffusion weighted image into the third supervision network for feature learning to obtain the third loss value aiming at each image pair.
Preferably, if the training set includes a plurality of first diffusion weighted images extracted from the data set, the model training module trains in a semi-supervised learning manner to obtain the image synthesis model, and the supervision network includes a first supervision network associated with the first loss function, a second supervision network associated with the second loss function, and a third supervision network associated with the third loss function;
the supervision submodule then comprises:
a fourth supervising unit, configured to, for each of the first diffusion weighted images, input the composite image and any one of the second diffusion weighted images in the reference set into the first supervising network for adversarial learning to obtain the first loss value;
a fifth supervision unit, which inputs the synthesized image, the corresponding first diffusion weighted image and the reference set into the second supervision network for feature learning to obtain the second loss value;
and the sixth supervision unit is used for inputting the synthetic image and the corresponding first diffusion weighted image into the third supervision network to carry out feature learning so as to obtain the third loss value.
Preferably, the second supervision unit includes:
a first identifying subunit, configured to respectively process, according to a first feature identification model generated in advance, to obtain a first feature image of the composite image at least one preset first feature level, a second feature image of the first diffusion weighted image at the first feature level, a third feature image of the first diffusion weighted image of each image pair in the reference set at the first feature level, and a fourth feature image of the second diffusion weighted image of each image pair at the first feature level;
the first block sub-unit is connected with the first identification sub-unit and is used for respectively carrying out image segmentation on each second characteristic image, each third characteristic image and each fourth characteristic image to obtain a plurality of characteristic blocks;
a first matching subunit, connected to the first blocking subunit, and configured to match, for each first feature level, each feature block of the second feature image with each feature block of each third feature image, and obtain a position coordinate of the feature block of each third feature image with a highest matching degree;
the first splicing subunit is respectively connected with the first blocking subunit and the first matching subunit, acquires each feature block of each fourth feature image corresponding to each third feature image according to the position coordinates, and performs image splicing on each feature block of each fourth feature image to obtain a first spliced feature image;
and the first calculating subunit is respectively connected to the first identifying subunit and the first splicing subunit, and is configured to respectively calculate a first pixel-level distance between the first feature image and the first spliced feature image corresponding to each first feature level, and average the first pixel-level distances to obtain a first average pixel-level distance, which is output as the second loss value.
Preferably, the third supervision unit comprises:
the second identification subunit is used for respectively processing according to a second feature identification model generated in advance to obtain a fifth feature image of the composite image at a preset at least one second feature level and a sixth feature image of the second diffusion weighted image at the second feature level;
and the second calculating subunit is connected to the second identifying subunit, and is configured to calculate a second pixel-level distance between the fifth feature image and the sixth feature image corresponding to each second feature level, and average the second pixel-level distances to obtain a second average pixel-level distance, which is output as the third loss value.
Preferably, the fifth supervision unit includes:
a third identifying subunit, configured to respectively process, according to a third feature identification model generated in advance, to obtain a seventh feature image of the composite image at least one preset third feature level, an eighth feature image of the first diffusion weighted image at the third feature level, a ninth feature image of the first diffusion weighted image at the third feature level of each image pair in the reference set, and a tenth feature image of the second diffusion weighted image of each image pair at the third feature level;
the second partitioning subunit is connected to the third identifying subunit and configured to perform image segmentation on each of the eighth feature image, the ninth feature image, and the tenth feature image to obtain a plurality of feature blocks;
a second matching subunit, connected to the second blocking subunit, and configured to match, for each third feature level, each feature block of the eighth feature image with each feature block of each ninth feature image, respectively, and obtain a position coordinate of the feature block of each ninth feature image with a highest matching degree;
the second splicing subunit is respectively connected with the second blocking subunit and the second matching subunit, acquires each feature block of each tenth feature image corresponding to each ninth feature image according to the position coordinates, and performs image splicing on each feature block of each tenth feature image to obtain a second spliced feature image;
and the third calculating subunit is respectively connected with the third identifying subunit and the second splicing subunit, and is configured to respectively calculate a third pixel-level distance between the seventh feature image and the second spliced feature image corresponding to each third feature level, and average the third pixel-level distances to obtain a third average pixel-level distance, which is output as the second loss value.
Preferably, the sixth supervision unit includes:
a fourth identification subunit, configured to respectively process, according to a fourth feature identification model generated in advance, to obtain an eleventh feature image of the composite image at least one preset fourth feature level and a twelfth feature image of the first diffusion weighted image at the fourth feature level;
and the fourth calculating subunit is connected to the fourth identifying subunit, and is configured to calculate a fourth pixel-level distance between the eleventh feature image and the twelfth feature image corresponding to each fourth feature level, and average the fourth pixel-level distances to obtain a fourth average pixel-level distance, which is output as the third loss value.
A method for generating a diffusion weighted image, applied to the system for generating a diffusion weighted image according to any one of the above paragraphs, the method specifically comprising:
step S1, the generating system acquires a plurality of first diffusion weighted images with first diffusion sensitivity coefficients and a plurality of second diffusion weighted images with second diffusion sensitivity coefficients, the first diffusion weighted images and the second diffusion weighted images are in one-to-one correspondence, the first diffusion weighted images and the corresponding second diffusion weighted images are matched one-to-one to form image pairs, and each image pair is added into a data set;
step S2, the generating system takes the first diffusion weighted image in the data set as input, takes the corresponding second diffusion weighted image as output, and obtains an image synthesis model by adopting pre-established supervision network training;
step S3, the generating system inputs the diffusion-weighted image with the first diffusion sensitivity coefficient into the image synthesis model, and obtains a synthesized diffusion-weighted image with the second diffusion sensitivity coefficient.
The technical scheme has the following advantages or beneficial effects:
1) the acquired standard b-value diffusion weighted image can be processed to obtain a high-quality high-b-value diffusion weighted image according to the generated image synthesis model, so that the scanning time and the software and hardware requirements of scanning equipment are greatly reduced, and the economic cost and the time cost are effectively saved;
2) the image synthesis model obtained by training on a certain type of image has a good synthesis effect on other similar type of images, and a user can replace the reference set data and the training set data according to the frame according to own data and requirements to generate a corresponding image synthesis model, so that certain flexibility is achieved.
Drawings
FIG. 1 is a schematic diagram of a diffusion weighted image generation system according to a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of an image synthesis network according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a network of discriminators according to a preferred embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for generating a diffusion weighted image according to a preferred embodiment of the present invention;
FIG. 5 is a diagram illustrating image composition results according to a preferred embodiment of the present invention;
FIG. 6 is a diagram illustrating an image composition result according to another preferred embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present invention is not limited to these embodiments; other embodiments fall within the scope of the present invention as long as they satisfy the gist of the present invention.
In a preferred embodiment of the present invention, based on the above problems in the prior art, there is provided a system for generating a diffusion weighted image, as shown in fig. 1, specifically including:
the image acquisition module 1 is used for acquiring a plurality of first diffusion weighted images with first diffusion sensitivity coefficients and a plurality of second diffusion weighted images with second diffusion sensitivity coefficients, wherein the first diffusion weighted images correspond to the second diffusion weighted images one by one;
the image acquisition module matches the first diffusion weighted image with the corresponding second diffusion weighted image one by one to form an image pair, and adds each image pair into a data set;
the second diffusion sensitivity coefficient is higher than the first diffusion sensitivity coefficient;
the model training module 2 is connected with the image acquisition module 1 and is used for taking the first diffusion weighted image in the data set as input, taking the corresponding second diffusion weighted image as output and obtaining an image synthesis model by adopting a pre-established supervision network for training;
and the image generation module 3 is connected with the model training module 2 and is used for inputting the diffusion weighted image with the first diffusion sensitivity coefficient into the image synthesis model and generating a synthesized diffusion weighted image with the second diffusion sensitivity coefficient.
Specifically, in this embodiment, the image synthesis model is obtained by training with the supervision network. Through the image synthesis model, only a diffusion weighted image with the first diffusion sensitivity coefficient is needed to quickly obtain a diffusion weighted image whose quality is similar to, or even higher than, that of an image acquired at the second diffusion sensitivity coefficient, the second diffusion sensitivity coefficient being higher than the first. In other words, a diffusion weighted image with a relatively high b value can be obtained quickly from a diffusion weighted image with a relatively low b value, which not only greatly shortens the scanning time but also lowers the software and hardware requirements of the equipment, so that a high-quality high-b-value DWI image can be obtained clinically at relatively low economic and time cost. The lower b value is a standard b value with a diffusion sensitivity coefficient of 800-1000 s/mm², and the higher b value corresponds to a diffusion sensitivity coefficient of 2000 s/mm².
More specifically, before model training, the images, namely the first diffusion weighted images and the second diffusion weighted images, need to be acquired. The ZOOMit-DWI sequence (Siemens) excites only a small region of interest with two-dimensional selectivity to remove fold-over and other artifacts, thereby obtaining DWI images with higher quality and resolution, and is gradually entering clinical use. The first diffusion weighted image is preferably a ZOOMit-DWI image with a first diffusion sensitivity coefficient of 1000 s/mm², and the second diffusion weighted image is preferably a ZOOMit-DWI image with a second diffusion sensitivity coefficient of 2000 s/mm². More preferably, the first diffusion weighted image and the second diffusion weighted image are 100 pixel by 100 pixel images cropped around the center of the original 112 pixel by 200 pixel diffusion weighted image and then resized to 224 pixels by 224 pixels. A plurality of one-to-one corresponding 1000 s/mm² ZOOMit-DWI images and 2000 s/mm² ZOOMit-DWI images are obtained as a reference set. For the fully supervised learning mode, a plurality of one-to-one corresponding 1000 s/mm² ZOOMit-DWI images and 2000 s/mm² ZOOMit-DWI images, which have no direct correspondence with the images in the reference set, are obtained as the fully supervised training set; for the semi-supervised learning mode, a plurality of 1000 s/mm² ZOOMit-DWI images, which have no direct correspondence with the images in the reference set, are obtained as the semi-supervised training set.
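The cropping and resizing described above can be illustrated with a short sketch. The function below is only an assumption of how such preprocessing might be implemented; the interpolation method and the handling of the grayscale range are not specified in the text, and the name preprocess_slice is illustrative.

```python
import numpy as np
from PIL import Image

def preprocess_slice(slice_2d: np.ndarray, crop: int = 100, out_size: int = 224) -> np.ndarray:
    """Crop a 100x100 region around the centre of a 112x200 DWI slice and resize it to 224x224."""
    h, w = slice_2d.shape                           # e.g. 112 x 200
    top, left = (h - crop) // 2, (w - crop) // 2
    patch = slice_2d[top:top + crop, left:left + crop]
    img = Image.fromarray(patch.astype(np.float32), mode="F")
    resized = img.resize((out_size, out_size), Image.BILINEAR)  # interpolation method assumed
    return np.asarray(resized)
```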
After the training set is obtained, a pre-established supervision network is adopted for training to obtain an image synthesis model which takes a first diffusion weighted image with a first diffusion sensitivity coefficient as input and a second diffusion weighted image with a second diffusion sensitivity coefficient as output, so that the diffusion weighted image with the second diffusion sensitivity coefficient is generated according to the diffusion weighted image with the first diffusion sensitivity coefficient for use.
In a preferred embodiment of the present invention, the image post-processing module 4 is further included, connected to the image generating module 3, and configured to perform edge enhancement and/or denoising on the generated diffusion weighted image with the second diffusion sensitivity coefficient.
In a preferred embodiment of the present invention, the supervisory network has a first loss function, a second loss function and a third loss function which are pre-constructed;
the model training module 2 includes:
the selecting submodule 21 is used for selecting a plurality of image pairs from the data set to form a reference set, and extracting data from the data set to obtain a training set;
the processing submodule 22 is configured to construct an image synthesis network according to a preset hyper-parameter, perform iterative training on the image synthesis network according to a training set, and input the first diffusion weighted images in each image pair into the image synthesis network respectively after each training is finished to obtain a synthesized image;
the supervision submodule 23 is connected to the selection submodule 21 and the processing submodule 22, and is configured to input the synthesized image, each first diffusion weighted image in the training set, and the reference set into a supervision network for learning, and output a first loss value calculated according to the first loss function, a second loss value calculated according to the second loss function, and a third loss value calculated according to the third loss function;
and the updating submodule 24 is respectively connected with the processing submodule 22 and the monitoring submodule 23, and is used for adjusting the hyper-parameter according to the first loss value, the second loss value and the third loss value after each training is finished, and obtaining an image synthesis model after all training is finished.
Specifically, in this embodiment, the diffusion weighted images in the reference set are all of the same type, for example acquired with the same imaging technique or on the same scanner, and the diffusion weighted images in the training set are likewise all of the same type. However, the diffusion weighted images in the reference set and those in the training set need not be of the same type, that is, they may be obtained with different imaging techniques or on different scanners.
The image synthesis network comprises three convolutional layers, namely a first convolutional layer 101, a second convolutional layer 102 and a third convolutional layer 103, five residual layers 300, and three deconvolution layers, namely a first deconvolution layer 201, a second deconvolution layer 202 and a third deconvolution layer 203, as shown in fig. 2. The convolutional layers are connected in sequence and the deconvolution layers are connected in sequence; except for the third deconvolution layer 203, each convolutional layer and each deconvolution layer is followed by a corresponding instance normalization layer and ReLU activation layer 400. The output of the normalization and ReLU activation layer 400 following the third convolutional layer 103, together with the output of the residual layers 300, forms the input of the first deconvolution layer 201; the output of the normalization and ReLU activation layer 400 following the second convolutional layer 102, together with the output of the normalization and ReLU activation layer 400 following the first deconvolution layer 201, forms the input of the second deconvolution layer 202; and the output of the normalization and ReLU activation layer 400 following the first convolutional layer 101, together with the output of the normalization and ReLU activation layer 400 following the second deconvolution layer 202, forms the input of the third deconvolution layer 203.
More preferably, the input channel of the first convolutional layer 101 is 3 and its output channel is 32, the input channel of the second convolutional layer 102 is 32 and its output channel is 64, the input channel of the third convolutional layer 103 is 64 and its output channel is 128, the input and output channels of each residual layer 300 are 128, the input channel of the first deconvolution layer 201 is 256 and its output channel is 64, the input channel of the second deconvolution layer 202 is 128 and its output channel is 32, and the input channel of the third deconvolution layer 203 is 64 and its output channel is 3.
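The layout described above can be sketched in PyTorch as follows. Only the channel counts and the conv / residual / deconv arrangement with skip concatenations come from the text; kernel sizes, strides, padding and the internal composition of the residual blocks are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual layer 300; the two-convolution structure inside is an assumption."""
    def __init__(self, ch: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)


def conv_block(in_ch, out_ch, stride):
    # convolution followed by instance normalization and ReLU activation (layer 400)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


def deconv_block(in_ch, out_ch, stride=2, final=False):
    layers = [nn.ConvTranspose2d(in_ch, out_ch, 3, stride=stride,
                                 padding=1, output_padding=stride - 1)]
    if not final:  # the third deconvolution layer has no normalization / activation
        layers += [nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)


class SynthesisNet(nn.Module):
    """Image synthesis network of Fig. 2 (channel counts from the text, rest assumed)."""
    def __init__(self):
        super().__init__()
        self.conv1 = conv_block(3, 32, stride=1)     # first convolutional layer 101
        self.conv2 = conv_block(32, 64, stride=2)    # second convolutional layer 102
        self.conv3 = conv_block(64, 128, stride=2)   # third convolutional layer 103
        self.res = nn.Sequential(*[ResidualBlock(128) for _ in range(5)])
        self.deconv1 = deconv_block(256, 64)                      # concat(conv3, res): 128 + 128
        self.deconv2 = deconv_block(128, 32)                      # concat(conv2, deconv1): 64 + 64
        self.deconv3 = deconv_block(64, 3, stride=1, final=True)  # concat(conv1, deconv2): 32 + 32

    def forward(self, x):
        c1 = self.conv1(x)
        c2 = self.conv2(c1)
        c3 = self.conv3(c2)
        r = self.res(c3)
        d1 = self.deconv1(torch.cat([c3, r], dim=1))
        d2 = self.deconv2(torch.cat([c2, d1], dim=1))
        return self.deconv3(torch.cat([c1, d2], dim=1))
```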
In a preferred embodiment of the present invention, the training set includes a plurality of image pairs extracted from the data set, and the model training module 2 performs training in a fully supervised learning manner to obtain an image synthesis model, where the supervision network includes a first supervision network associated with a first loss function, a second supervision network associated with a second loss function, and a third supervision network associated with a third loss function;
the supervision submodule 23 comprises:
a first supervision unit 231, configured to, for each image pair, input the synthesized image and the corresponding second diffusion weighted image into the first supervision network for adversarial learning to obtain a first loss value;
a second supervision unit 232, configured to input, for each image pair, the synthesized image and the corresponding first diffusion weighted image, and the reference set into a second supervision network for feature learning to obtain a second loss value;
and a third supervision unit 233, configured to, for each image pair, input the synthesized image and the corresponding second diffusion weighted image into a third supervision network for feature learning to obtain a third loss value.
Specifically, in this embodiment, when the number of image pairs in the training set is sufficiently large, the image synthesis model is preferably obtained by training in a fully supervised learning manner with the supervision network. The first supervision network is preferably an SA (synthesis adaptation) module, the second supervision network is preferably a PFR (pseudo feature registration) module, and the third supervision network is preferably an FRCV (feature registration verification) module.
More specifically, before model training, an image synthesis network is constructed. The image synthesis network processes an input ZOOMit-DWI image with a first diffusion sensitivity coefficient of 1000 s/mm² to obtain an approximation of the image at a second diffusion sensitivity coefficient of 2000 s/mm²; that is, inputting a first diffusion weighted image from the training set into the image synthesis network yields the corresponding synthesized image.
In the model training process, the synthesized image output by the image synthesis network in each training round and the second diffusion weighted image in the image pair associated with the first diffusion weighted image corresponding to that synthesized image are used as input data of the first supervision network; this second diffusion weighted image is the real image with a diffusion sensitivity coefficient of 2000 s/mm² corresponding to the first diffusion weighted image.
The first supervision network comprises a discriminator network. Through adversarial learning between the image synthesis network and the discriminator network, the discriminator improves its ability to distinguish the synthesized image from the real image, which in turn drives the generator to deceive the discriminator. The first supervision network has a pre-constructed first loss function L_sa. In the fully supervised learning mode, the discriminator learns from the real ZOOMit-DWI images with a diffusion sensitivity coefficient of 2000 s/mm² to improve its ability, and the first loss value is finally calculated and output according to the first loss function L_sa.
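As an illustration of this adversarial supervision, the sketch below uses a least-squares GAN objective; the patent does not give the exact form of L_sa, so the loss formulation and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def sa_discriminator_loss(discriminator, real_high_b, synthesized_high_b):
    """Train the discriminator to separate real high-b images from synthesized ones."""
    pred_real = discriminator(real_high_b)
    pred_fake = discriminator(synthesized_high_b.detach())
    return (F.mse_loss(pred_real, torch.ones_like(pred_real)) +
            F.mse_loss(pred_fake, torch.zeros_like(pred_fake)))

def sa_generator_loss(discriminator, synthesized_high_b):
    """First loss value: push the image synthesis network to deceive the discriminator."""
    pred_fake = discriminator(synthesized_high_b)
    return F.mse_loss(pred_fake, torch.ones_like(pred_fake))
```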
Before the model training, a plurality of image pairs are randomly selected from the acquired image pairs to serve as a reference set R. In the model training process, the synthesized image output by the image synthesis network in each training round, the first diffusion weighted image in the training set corresponding to that synthesized image, and the reference set R are used as input data of the second supervision network.
The second supervision network has a pre-constructed second loss function L_pfr and comprises a first feature recognition model. Through the first feature recognition model, features at different first feature levels (the output features of each activation layer are taken) are extracted from the first diffusion weighted image of the training set and from all the first diffusion weighted images of the reference set R in the input data, and are segmented into feature blocks of size k × k. At each feature level, the feature blocks of the first diffusion weighted image from the training set are matched against the feature blocks of all the first diffusion weighted images in the reference set, so that each feature block of the training-set first diffusion weighted image finds its closest counterpart among all feature blocks of all the first diffusion weighted images in the reference set R, and the position coordinates of that counterpart are obtained. According to these position coordinates, the corresponding feature blocks of the corresponding second diffusion weighted images in the reference set R are extracted through the first feature recognition model and assembled into a latent feature representation at each first feature level, namely the first stitched feature image. Similarly, feature blocks at each feature level are extracted from the synthesized image, the first pixel-level distance between them and the latent feature representation is calculated, and the average value is used as the second loss value to supervise the training of the image synthesis network. The first feature recognition model is a VGG-19 network, and the first feature levels comprise level 3, level 4 and level 5.
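A simplified sketch of this patch matching and stitching is given below. It assumes the feature maps have already been extracted by the VGG-19 based recognition model; the patch size k, the matching and distance metrics, and the gathering of patches by index are illustrative assumptions rather than the patent's exact procedure, and the function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def to_patches(feat: torch.Tensor, k: int) -> torch.Tensor:
    """Split a (C, H, W) feature map into non-overlapping k x k feature blocks."""
    patches = F.unfold(feat.unsqueeze(0), kernel_size=k, stride=k)  # (1, C*k*k, N)
    return patches.squeeze(0).t()                                   # (N, C*k*k)

def pfr_level_loss(fake_feat, low_b_feat, ref_low_feats, ref_high_feats, k=4):
    """One feature level: match the low-b feature blocks against the reference low-b
    blocks, stitch the reference high-b blocks found at the matched positions into a
    latent feature representation, and compare it with the synthesized image's features."""
    query = to_patches(low_b_feat, k)                                  # (N, D)
    ref_low = torch.cat([to_patches(f, k) for f in ref_low_feats])     # (M, D)
    ref_high = torch.cat([to_patches(f, k) for f in ref_high_feats])   # (M, D)
    idx = torch.cdist(query, ref_low).argmin(dim=1)                    # closest block per query
    stitched = ref_high[idx]                                           # first stitched feature image
    return F.l1_loss(to_patches(fake_feat, k), stitched)               # pixel-level distance

def pfr_loss(fake_feats, low_b_feats, ref_low_feats, ref_high_feats, k=4):
    """Second loss value: average of the per-level distances (VGG-19 levels 3-5 in the text).
    fake_feats / low_b_feats are lists of per-level feature maps; ref_*_feats are
    lists (per level) of lists of reference feature maps."""
    losses = [pfr_level_loss(f, l, rl, rh, k)
              for f, l, rl, rh in zip(fake_feats, low_b_feats, ref_low_feats, ref_high_feats)]
    return torch.stack(losses).mean()
```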
Intuitively, the first pixel-level distance between the synthesized image and the first stitched feature image, taken as the target, can be used as a constraint, but it is not sufficient to verify the consistency of structural semantic features, nor is it suitable for semi-supervised learning, where real images are lacking. Low-level features better represent color and texture information, while high-level features are more robust to shape changes and geometric transformations. Therefore, instead of a mean square error (MSE) at the pixel level, an FRCV module, i.e. the third supervision network, is constructed.
In the model training process, the synthesized image output by the image synthesis network in each training round and the second diffusion weighted image in the image pair associated with the first diffusion weighted image corresponding to that synthesized image are further used as input data of the third supervision network; the diffusion sensitivity coefficient of this second diffusion weighted image is 2000 s/mm². The third supervision network has a pre-constructed third loss function L_frcv. Taking the real image as the target image, features at a plurality of second feature levels are extracted from the synthesized image and the target image through the third supervision network, and their distance is calculated to obtain the third loss value. The second feature levels include level 1, level 3 and level 5.
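The feature-level comparison performed by the FRCV module can be sketched as follows, assuming the per-level feature maps have already been extracted; the choice of an L1 distance is an assumption, since the text only speaks of a feature distance.

```python
import torch
import torch.nn.functional as F

def frcv_loss(fake_feats, target_feats):
    """Third loss value: average distance between the synthesized image's features and
    the target image's features over the chosen feature levels (lists of feature maps)."""
    losses = [F.l1_loss(f, t) for f, t in zip(fake_feats, target_feats)]
    return torch.stack(losses).mean()
```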
After each training round, the hyper-parameters of the image synthesis network are adjusted according to the first loss value, the second loss value and the third loss value; preferably, when the number of training iterations reaches a preset total number of iterations, all training is finished and the image synthesis model is obtained.
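One fully supervised training round might therefore look like the sketch below. The optimizers, learning rates and the loss weights w are assumptions (the text only states that the network is updated from the three loss values); pfr_fn and frcv_fn stand for callables such as the PFR and FRCV sketches above.

```python
import torch
import torch.nn.functional as F

def train_round(generator, discriminator, g_opt, d_opt,
                low_b, high_b, pfr_fn, frcv_fn, w=(1.0, 1.0, 1.0)):
    # 1) update the discriminator of the first supervision network
    with torch.no_grad():
        fake = generator(low_b)
    pred_real, pred_fake = discriminator(high_b), discriminator(fake)
    d_loss = (F.mse_loss(pred_real, torch.ones_like(pred_real)) +
              F.mse_loss(pred_fake, torch.zeros_like(pred_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) update the image synthesis network from the three loss values
    fake = generator(low_b)
    pred_fake = discriminator(fake)
    loss1 = F.mse_loss(pred_fake, torch.ones_like(pred_fake))  # first loss value
    loss2 = pfr_fn(fake)                                       # second loss value
    loss3 = frcv_fn(fake)                                      # third loss value
    total = w[0] * loss1 + w[1] * loss2 + w[2] * loss3
    g_opt.zero_grad(); total.backward(); g_opt.step()
    return d_loss.item(), total.item()
```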
Further preferably, the discriminator network of the first supervision network is a fully convolutional network, as shown in fig. 3, comprising convolutional layers, activation layers and instance normalization layers 400, where the convolutional layers include a fourth convolutional layer 104, a fifth convolutional layer 105, a sixth convolutional layer 106, a seventh convolutional layer 107 and an eighth convolutional layer 108 connected in sequence, and each convolutional layer except the eighth convolutional layer 108 is followed by a corresponding instance normalization layer and ReLU activation layer 400.
Wherein the input channel of the fourth convolutional layer 104 is 3, the output channel is 32, the input channel of the fifth convolutional layer 105 is 32, the output channel is 64, the input channel of the sixth convolutional layer 106 is 64, the output channel is 128, the input channel of the seventh convolutional layer 107 is 128, the output channel is 256, the input channel of the eighth convolutional layer 108 is 256, and the output channel is 1.
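A minimal PyTorch sketch of this discriminator is given below; it follows the channel counts above, while the kernel sizes and strides are assumptions.

```python
import torch.nn as nn

def d_block(in_ch, out_ch, last=False):
    layers = [nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1)]
    if not last:  # the eighth convolutional layer has no normalization / activation
        layers += [nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

discriminator = nn.Sequential(
    d_block(3, 32),              # fourth convolutional layer 104
    d_block(32, 64),             # fifth convolutional layer 105
    d_block(64, 128),            # sixth convolutional layer 106
    d_block(128, 256),           # seventh convolutional layer 107
    d_block(256, 1, last=True),  # eighth convolutional layer 108, single-channel output
)
```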
In a preferred embodiment of the present invention, the training set includes a plurality of first diffusion weighted images extracted from the data set, and the model training module 2 performs training in a semi-supervised learning manner to obtain an image synthesis model, where the supervision network includes a first supervision network associated with the first loss function, a second supervision network associated with the second loss function, and a third supervision network associated with the third loss function;
the supervision submodule 23 comprises:
a fourth supervising unit 234, configured to, for each first diffusion weighted image, input the synthesized image and any one second diffusion weighted image in the reference set into the first supervision network for adversarial learning to obtain a first loss value;
a fifth monitoring unit 235, which inputs the synthesized image, the corresponding first diffusion weighted image, and the reference set into a second monitoring network for feature learning to obtain a second loss value;
and a sixth supervision unit 236, which inputs the synthesized image and the corresponding first diffusion weighted image into a third supervision network for feature learning to obtain a third loss value.
Specifically, in this embodiment, when the number of image pairs in the data set is small, only the first diffusion weighted images are used as the training set, and the image synthesis model is preferably obtained by training in a semi-supervised learning manner. Similarly, the supervision network includes a first supervision network, a second supervision network and a third supervision network, where the first supervision network is preferably the SA (synthesis adaptation) module, the second supervision network is preferably the PFR (pseudo feature registration) module, and the third supervision network is preferably the FRCV (feature registration verification) module. In this case, the PFR module and the FRCV module carry out semi-supervised training by generating latent feature representations and checking their consistency.
Furthermore, the training process of semi-supervised learning is similar to that of fully supervised learning; the main differences are as follows:
and for the first supervision network, in the model training process, the synthetic image output by the image synthetic network obtained by each training and a second diffusion weighted image randomly selected from the reference set R are used as input data of the first supervision network.
In fully supervised learning, the discriminator in the first supervision network learns from the real paired images, whereas in semi-supervised learning the discriminator learns from a second diffusion weighted image randomly selected from the reference set R, judging whether the synthesized image approximates the image style of that second diffusion weighted image, and finally outputs the first loss value.
For the second supervision network, semi-supervised learning is identical to fully supervised learning, so the description is not repeated here; the second supervision network likewise outputs the second loss value.
For the third supervision network, in the model training process, the synthesized image output by the image synthesis network in each training round and the first diffusion weighted image corresponding to that synthesized image are used as its input data.
Because the training set lacks real second diffusion weighted images, features at a plurality of fourth feature levels are extracted from the synthesized image and from the first diffusion weighted image serving as the target image, and their distance is calculated to supervise the structural semantics, finally yielding the third loss value. The fourth feature levels include level 2, level 3 and level 5.
Similarly, after each training round, the hyper-parameters of the image synthesis network are adjusted according to the first loss value, the second loss value and the third loss value; preferably, when the number of training iterations reaches the preset total number of iterations, all training is finished and the image synthesis model is obtained.
In a preferred embodiment of the present invention, the second monitoring unit 232 includes:
a first identifying subunit 2321, configured to respectively process, according to a first feature recognition model generated in advance, to obtain a first feature image of the synthesized image at least one preset first feature level, a second feature image of the first diffusion weighted image at the first feature level, a third feature image of the first diffusion weighted image of each image pair at the first feature level in the reference set, and a fourth feature image of the second diffusion weighted image of each image pair at the first feature level;
the first block-dividing subunit 2322 is connected to the first identifying subunit 2321, and is configured to perform image segmentation on each of the second feature image, the third feature image, and the fourth feature image to obtain a plurality of feature blocks;
a first matching subunit 2323, connected to the first blocking subunit 2322, configured to match, for each first feature level, each feature block of the second feature image with each feature block of each third feature image, and obtain a position coordinate of the feature block of each third feature image with the highest matching degree;
the first splicing subunit 2324 is connected to the first blocking subunit 2322 and the first matching subunit 2323, respectively, acquires each feature block of each fourth feature image corresponding to each third feature image according to the position coordinates, and performs image splicing on each feature block of each fourth feature image to obtain a first spliced feature image;
the first calculating subunit 2325 is connected to the first identifying subunit 2321 and the first splicing subunit 2324, and is configured to calculate the first pixel-level distances between the first feature image and the first spliced feature image corresponding to each first feature level, obtain a first average pixel-level distance by averaging the first pixel-level distances, and output it as a second loss value.
In a preferred embodiment of the present invention, the third monitoring unit 233 includes:
a second identifying subunit 2331, configured to respectively process, according to a second feature identification model generated in advance, to obtain a fifth feature image of the composite image at a preset at least one second feature level and a sixth feature image of the second diffusion weighted image at the second feature level;
a second calculating subunit 2332, connected to the second identifying subunit 2331, and configured to calculate a second pixel-level distance between the fifth feature image and the sixth feature image corresponding to each second feature level, and average the second pixel-level distances to obtain a second average pixel-level distance, which is output as a third loss value.
In a preferred embodiment of the present invention, the fifth monitoring unit 235 includes:
a third identifying subunit 2351, configured to respectively process, according to a third feature recognition model generated in advance, to obtain a seventh feature image of the synthesized image at least one preset third feature level, an eighth feature image of the first diffusion-weighted image at the third feature level, a ninth feature image of the first diffusion-weighted image at the third feature level in each image pair in the reference set, and a tenth feature image of the second diffusion-weighted image at the third feature level in each image pair;
the second block subunit 2352 is connected with the third identification subunit 2351 and is used for respectively performing image segmentation on each eighth feature image, each ninth feature image and each tenth feature image to obtain a plurality of feature blocks;
the second matching subunit 2353, connected to the second partitioning subunit 2352, is configured to match, for each third feature level, each feature block of the eighth feature image with each feature block of each ninth feature image, respectively, and obtain a position coordinate of the feature block of each ninth feature image with the highest matching degree;
the second splicing subunit 2354 is connected with the second partitioning subunit 2352 and the second matching subunit 2353 respectively, acquires each feature block of each tenth feature image corresponding to each ninth feature image according to the position coordinate, and performs image splicing on each feature block of each tenth feature image to obtain a second spliced feature image;
the third calculation subunit 2355 is connected to the third identification subunit 2351 and the second splicing subunit 2354, respectively, and is configured to calculate a third pixel-level distance between the seventh feature image and the second spliced feature image corresponding to each third feature level, and average the third pixel-level distances to obtain a third average pixel-level distance, which is output as a second loss value.
In a preferred embodiment of the present invention, the sixth supervision unit 236 includes:
a fourth identifying subunit 2361, configured to respectively process, according to a fourth feature recognition model generated in advance, an eleventh feature image of the synthesized image at least one preset fourth feature level and a twelfth feature image of the first diffusion weighted image at the fourth feature level;
and the fourth calculating subunit 2362 is connected to the fourth identifying subunit 2361, and is configured to calculate a fourth pixel-level distance between the eleventh feature image and the twelfth feature image corresponding to each fourth feature level, obtain a fourth average pixel-level distance by averaging the fourth pixel-level distances, and output it as a third loss value.
A method for generating a high-b-value diffusion weighted image, applied to any one of the above systems for generating a diffusion weighted image, as shown in fig. 4, specifically includes:
step S1, the generating system acquires a plurality of first diffusion weighted images with first diffusion sensitivity coefficients and a plurality of second diffusion weighted images with second diffusion sensitivity coefficients, the first diffusion weighted images and the second diffusion weighted images are in one-to-one correspondence, the first diffusion weighted images and the corresponding second diffusion weighted images are matched one-to-one to form image pairs, and the image pairs are added into a data set;
step S2, the generation system takes the first diffusion weighted image in the data set as input, takes the corresponding second diffusion weighted image as output, and obtains an image synthesis model by adopting pre-established supervision network training;
in step S3, the generating system inputs the diffusion-weighted image with the first diffusion sensitivity coefficient into the image synthesis model to obtain a synthesized diffusion-weighted image with the second diffusion sensitivity coefficient.
In a preferred embodiment of the present invention, as shown in fig. 5, the two images in the first column on the left are ZOOMit-DWI images with a b value of 1000 s/mm²; the two images in the second column are the ZOOMit-DWI images with a b value of 2000 s/mm² obtained by processing with the image synthesis model of the invention; the two images in the third column are real ZOOMit-DWI images with a b value of 2000 s/mm²; and the two images in the fourth column are the synthesized images of the second column after denoising and contrast enhancement. The image quality of the synthesized images is higher than that of the real ZOOMit-DWI images with a b value of 2000 s/mm² and can meet the requirements of clinical use.
In another preferred embodiment of the present invention, as shown in fig. 6, the two images in the first column on the left are ssEPI-DWI images with a b value of 1000 s/mm²; the two images in the second column are the ZOOMit-DWI images with a b value of 2000 s/mm² obtained by processing with the image synthesis model of the invention; and the two images in the third column are the synthesized images of the second column after denoising and contrast enhancement. This shows that the image synthesis model of the invention is also applicable to conventional EPI DWI images with a b value of 1000 s/mm².
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A diffusion weighted image generation system is characterized by specifically comprising:
the image acquisition module is used for acquiring a plurality of first diffusion weighted images with first diffusion sensitivity coefficients and a plurality of second diffusion weighted images with second diffusion sensitivity coefficients, wherein the first diffusion weighted images correspond to the second diffusion weighted images one by one;
the image acquisition module matches the first diffusion weighted image with the corresponding second diffusion weighted image one by one to form an image pair, and adds each image pair into a data set;
the second diffusion sensitivity coefficient is higher than the first diffusion sensitivity coefficient;
the model training module is connected with the image acquisition module and used for taking the first diffusion weighted image in the data set as input, taking the corresponding second diffusion weighted image as output and obtaining an image synthesis model by adopting a pre-established supervision network for training;
and the image generation module is connected with the model training module and used for inputting the diffusion weighted image with the first diffusion sensitivity coefficient into the image synthesis model and generating a synthesized diffusion weighted image with the second diffusion sensitivity coefficient.
2. The system for generating diffusion-weighted images as claimed in claim 1, further comprising an image post-processing module connected to the image generation module for performing edge enhancement and/or denoising on the generated diffusion-weighted image with the second diffusion sensitivity coefficient.
3. The system for generating diffusion-weighted images of claim 1, wherein the supervision network has a pre-constructed first loss function, second loss function, and third loss function;
the model training module comprises:
a selection submodule for selecting a plurality of image pairs from the data set to form a reference set, and extracting data from the data set to obtain a training set;
a processing submodule for constructing an image synthesis network according to preset hyper-parameters, performing iterative training on the image synthesis network according to the training set, and, after each round of training, respectively inputting each first diffusion weighted image in the training set into the image synthesis network to obtain a synthesized image;
a supervision submodule, respectively connected to the selection submodule and the processing submodule, configured to input the synthesized image, each of the first diffusion weighted images in the training set, and the reference set into the supervision network for learning, and output a first loss value calculated according to the first loss function, a second loss value calculated according to the second loss function, and a third loss value calculated according to the third loss function;
and an updating submodule, connected with the processing submodule and the supervision submodule respectively, for adjusting the hyper-parameters according to the first loss value, the second loss value, and the third loss value after each round of training, and obtaining the image synthesis model after all training is finished.
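The training flow of claim 3 (iterative training, a synthesized image per round, three loss values fed back to adjust the hyper-parameters) can be sketched roughly as follows. The tiny CNN generator, the equal weighting of the three losses, and the shape of the supervision callable are all assumptions made for illustration; the claim does not fix any of them.

```python
import torch
from torch import nn

# A minimal stand-in generator; the patent does not specify the image
# synthesis network architecture, so this small CNN is illustrative only.
def make_generator() -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )

def train_synthesis_model(train_loader, reference_set, supervision,
                          epochs: int = 10, lr: float = 1e-4):
    """Iteratively train the image synthesis network.

    `supervision` is assumed to be a callable returning the three loss
    values described in the claims; its exact interface is an assumption.
    """
    generator = make_generator()
    optimizer = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(epochs):
        for first_dwi, second_dwi in train_loader:
            synthetic = generator(first_dwi)
            loss1, loss2, loss3 = supervision(synthetic, first_dwi,
                                              second_dwi, reference_set)
            loss = loss1 + loss2 + loss3  # equal weights, illustrative only
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # After each round, the observed loss values could be used to adjust
        # the hyper-parameters (e.g. the learning rate), as the claim states.
    return generator
```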
4. The system for generating diffusion-weighted images according to claim 3, wherein the training set includes a plurality of image pairs extracted from the data set, the model training module trains the image synthesis model by means of fully supervised learning, and the supervision networks include a first supervision network associated with the first loss function, a second supervision network associated with the second loss function, and a third supervision network associated with the third loss function;
the supervision submodule comprises:
a first supervision unit, configured to, for each of the image pairs, input the composite image and the corresponding second diffusion weighted image into the first supervision network for adversarial learning to obtain the first loss value;
a second supervision unit, configured to, for each of the image pairs, input the synthesized image, the corresponding first diffusion weighted image, and the reference set into the second supervision network for feature learning to obtain the second loss value;
and a third supervision unit, configured to, for each of the image pairs, input the synthesized image and the corresponding second diffusion weighted image into the third supervision network for feature learning to obtain the third loss value.
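For the fully supervised case of claim 4, the first supervision unit performs adversarial learning between the composite image and the paired second diffusion weighted image. The sketch below shows one illustrative form of such an adversarial "first loss value" with a small stand-in discriminator; the binary cross-entropy formulation and the architecture are my choices, not details from the claim.

```python
import torch
from torch import nn
import torch.nn.functional as F

# Small stand-in for the first supervision network (a discriminator).
discriminator = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

def first_loss_value(synthetic: torch.Tensor, real_high_b: torch.Tensor) -> torch.Tensor:
    """Adversarial loss over one image pair: the discriminator is asked to
    score the paired real high-b image as real and the synthesized image as
    fake.  Full GAN training would alternate generator and discriminator
    updates, which is omitted in this sketch."""
    fake_score = discriminator(synthetic)
    real_score = discriminator(real_high_b)
    loss_fake = F.binary_cross_entropy_with_logits(fake_score, torch.zeros_like(fake_score))
    loss_real = F.binary_cross_entropy_with_logits(real_score, torch.ones_like(real_score))
    return loss_fake + loss_real
```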
5. The system for generating diffusion-weighted images according to claim 3, wherein the training set includes a plurality of first diffusion weighted images extracted from the data set, the model training module trains the image synthesis model by semi-supervised learning, and the supervision networks include a first supervision network associated with the first loss function, a second supervision network associated with the second loss function, and a third supervision network associated with the third loss function;
the supervision submodule comprises:
a fourth supervision unit, configured to, for each of the first diffusion weighted images, input the composite image and any one of the second diffusion weighted images in the reference set into the first supervision network for adversarial learning to obtain the first loss value;
a fifth supervision unit, which inputs the synthesized image, the corresponding first diffusion weighted image and the reference set into the second supervision network for feature learning to obtain the second loss value;
and the sixth supervision unit is used for inputting the synthetic image and the corresponding first diffusion weighted image into the third supervision network to carry out feature learning so as to obtain the third loss value.
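Claim 5 differs from claim 4 mainly in that the composite image is compared against any second diffusion weighted image drawn from the reference set rather than against a paired ground truth. A minimal sketch of that sampling follows, assuming (my assumption) that the reference set is a list of (first, second) image tensors.

```python
import random
import torch

def sample_unpaired_high_b(reference_set: list) -> torch.Tensor:
    """Pick any second (high-b) DWI from the reference set to serve as the
    'real' example for adversarial learning when no paired ground truth
    exists for the current first DWI."""
    _, second_dwi = random.choice(reference_set)
    return second_dwi

# Illustrative use with the first_loss_value sketch given after claim 4:
# loss1 = first_loss_value(synthetic, sample_unpaired_high_b(reference_set))
```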
6. The diffusion weighted image generation system of claim 4, wherein the second supervision unit comprises:
a first identifying subunit, configured to process according to a first feature identification model generated in advance to respectively obtain a first feature image of the composite image at at least one preset first feature level, a second feature image of the first diffusion weighted image at the first feature level, a third feature image of the first diffusion weighted image of each image pair in the reference set at the first feature level, and a fourth feature image of the second diffusion weighted image of each image pair at the first feature level;
a first blocking subunit, connected with the first identifying subunit, for respectively performing image segmentation on each second feature image, each third feature image, and each fourth feature image to obtain a plurality of feature blocks;
a first matching subunit, connected with the first blocking subunit, configured to match, for each first feature level, each feature block of the second feature image with the feature blocks of each third feature image, and to obtain the position coordinates of the feature block with the highest matching degree in each third feature image;
a first splicing subunit, connected with the first blocking subunit and the first matching subunit respectively, configured to acquire, according to the position coordinates, each feature block of each fourth feature image corresponding to each third feature image, and to perform image splicing on the feature blocks of each fourth feature image to obtain a first spliced feature image;
and a first calculating subunit, connected with the first identifying subunit and the first splicing subunit respectively, configured to calculate a first pixel-level distance between the first feature image and the first spliced feature image corresponding to each first feature level, and to average the first pixel-level distances to obtain a first average pixel-level distance, which is output as the second loss value.
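Claim 6's second supervision unit extracts feature images, cuts them into blocks, matches each block of the input's low-b feature image against the reference low-b feature images, stitches together the corresponding blocks from the paired high-b feature images, and measures a pixel-level distance to the composite image's feature image. A rough NumPy sketch follows; the L2 block matching, the non-overlapping 8x8 blocks, and the L1 pixel-level distance are illustrative choices only.

```python
import numpy as np

def split_into_blocks(feat: np.ndarray, block: int):
    """Cut a 2-D feature map into non-overlapping block x block patches,
    together with their top-left coordinates."""
    h, w = feat.shape
    patches, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patches.append(feat[y:y + block, x:x + block])
            coords.append((y, x))
    return patches, coords

def stitched_reference_feature(second_feat: np.ndarray,
                               third_feats: list, fourth_feats: list,
                               block: int = 8) -> np.ndarray:
    """For each block of the input's (second) feature image, find the
    best-matching block among the reference low-b (third) feature images and
    copy the co-located block from the paired high-b (fourth) feature image
    into the stitched result.  L2 matching is illustrative; the claim only
    requires a 'matching degree'."""
    assert third_feats and fourth_feats, "reference set must be non-empty"
    stitched = np.zeros_like(second_feat)
    blocks, coords = split_into_blocks(second_feat, block)
    for patch, (y, x) in zip(blocks, coords):
        best, best_src = np.inf, None
        for third, fourth in zip(third_feats, fourth_feats):
            ref_blocks, ref_coords = split_into_blocks(third, block)
            for ref_patch, (ry, rx) in zip(ref_blocks, ref_coords):
                d = float(np.sum((patch - ref_patch) ** 2))
                if d < best:
                    best, best_src = d, fourth[ry:ry + block, rx:rx + block]
        stitched[y:y + block, x:x + block] = best_src
    return stitched

def second_loss_value(first_feat: np.ndarray, stitched: np.ndarray) -> float:
    """Mean pixel-level distance between the composite image's feature image
    and the stitched reference feature image."""
    return float(np.mean(np.abs(first_feat - stitched)))
```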
7. The diffusion weighted image generation system of claim 4, wherein the third supervision unit comprises:
a second identifying subunit, configured to process according to a second feature identification model generated in advance to respectively obtain a fifth feature image of the composite image at at least one preset second feature level and a sixth feature image of the second diffusion weighted image at the second feature level;
and a second calculating subunit, connected with the second identifying subunit, configured to calculate a second pixel-level distance between the fifth feature image and the sixth feature image corresponding to each second feature level, and to average the second pixel-level distances to obtain a second average pixel-level distance, which is output as the third loss value.
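Claim 7's third supervision unit compares feature images of the composite image and of the paired second diffusion weighted image at one or more feature levels and averages the pixel-level distances. In the sketch below, untrained VGG-16 layers stand in for the "second feature identification model generated in advance"; the chosen layer indices, the single-channel-to-RGB trick, and the L1 distance are all assumptions.

```python
import torch
from torchvision.models import vgg16

# VGG-16 layers stand in for the pre-generated feature identification model.
_extractor = vgg16(weights=None).features.eval()
_levels = (3, 8, 15)  # illustrative 'second feature levels'

def _feature_images(x: torch.Tensor) -> list:
    """Collect the feature image at each chosen level for an (N,1,H,W) DWI."""
    out, y = [], x.repeat(1, 3, 1, 1)  # grey DWI -> 3-channel VGG input
    for i, layer in enumerate(_extractor):
        y = layer(y)
        if i in _levels:
            out.append(y)
    return out

def third_loss_value(synthetic: torch.Tensor, real_high_b: torch.Tensor) -> torch.Tensor:
    """Average of the mean pixel-level distances between corresponding
    feature images of the composite and the paired real high-b image."""
    dists = [torch.mean(torch.abs(a - b))
             for a, b in zip(_feature_images(synthetic), _feature_images(real_high_b))]
    return torch.stack(dists).mean()
```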
8. The diffusion weighted image generation system of claim 5, wherein the fifth supervision unit comprises:
a third identifying subunit, configured to process according to a third feature identification model generated in advance to respectively obtain a seventh feature image of the composite image at at least one preset third feature level, an eighth feature image of the first diffusion weighted image at the third feature level, a ninth feature image of the first diffusion weighted image of each image pair in the reference set at the third feature level, and a tenth feature image of the second diffusion weighted image of each image pair at the third feature level;
a second blocking subunit, connected with the third identifying subunit, for respectively performing image segmentation on each eighth feature image, each ninth feature image, and each tenth feature image to obtain a plurality of feature blocks;
a second matching subunit, connected with the second blocking subunit, configured to match, for each third feature level, each feature block of the eighth feature image with the feature blocks of each ninth feature image, and to obtain the position coordinates of the feature block with the highest matching degree in each ninth feature image;
a second splicing subunit, connected with the second blocking subunit and the second matching subunit respectively, configured to acquire, according to the position coordinates, each feature block of each tenth feature image corresponding to each ninth feature image, and to perform image splicing on the feature blocks of each tenth feature image to obtain a second spliced feature image;
and a third calculating subunit, connected with the third identifying subunit and the second splicing subunit respectively, configured to calculate a third pixel-level distance between the seventh feature image and the second spliced feature image corresponding to each third feature level, and to average the third pixel-level distances to obtain a third average pixel-level distance, which is output as the second loss value.
9. The diffusion weighted image generation system of claim 5, wherein the sixth supervision unit comprises:
a fourth identifying subunit, configured to process according to a fourth feature identification model generated in advance to respectively obtain an eleventh feature image of the composite image at at least one preset fourth feature level and a twelfth feature image of the first diffusion weighted image at the fourth feature level;
and a fourth calculating subunit, connected with the fourth identifying subunit, configured to calculate a fourth pixel-level distance between the eleventh feature image and the twelfth feature image corresponding to each fourth feature level, and to average the fourth pixel-level distances to obtain a fourth average pixel-level distance, which is output as the third loss value.
10. A method for generating a diffusion-weighted image, applied to the system for generating a diffusion-weighted image according to any one of claims 1 to 9, the method comprising:
step S1, the generating system acquires a plurality of first diffusion weighted images with first diffusion sensitivity coefficients and a plurality of second diffusion weighted images with second diffusion sensitivity coefficients, the first diffusion weighted images and the second diffusion weighted images are in one-to-one correspondence, the first diffusion weighted images and the corresponding second diffusion weighted images are matched one-to-one to form image pairs, and each image pair is added into a data set;
step S2, the generating system takes the first diffusion weighted image in the data set as input and the corresponding second diffusion weighted image as output, and obtains an image synthesis model by training with a pre-established supervision network;
step S3, the generating system inputs the diffusion-weighted image with the first diffusion sensitivity coefficient into the image synthesis model, and obtains a synthesized diffusion-weighted image with the second diffusion sensitivity coefficient.
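Purely to illustrate the flow of steps S1–S3, the toy script below runs the whole pipeline on random tensors standing in for DWI data; the tiny network, the plain L1 objective, and the tensor shapes are stand-ins and deliberately ignore the three-loss supervision network sketched alongside the claims above.

```python
import torch
from torch import nn

# S1: acquire paired low-b / high-b images (random stand-ins here).
first_dwis = torch.rand(8, 1, 64, 64)   # b = 1000 s/mm^2 stand-ins
second_dwis = torch.rand(8, 1, 64, 64)  # b = 2000 s/mm^2 stand-ins

# S2: train a minimal synthesis network (plain L1 for brevity).
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.l1_loss(net(first_dwis), second_dwis)
    loss.backward()
    opt.step()

# S3: synthesize a high-b image from a new low-b image.
with torch.no_grad():
    synthetic_high_b = net(torch.rand(1, 1, 64, 64))
```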
CN202110136563.3A 2020-06-12 2021-02-01 Diffusion weighted image generation system and method Active CN112785540B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202021084354 2020-06-12
CN2020210843546 2020-06-12

Publications (2)

Publication Number Publication Date
CN112785540A true CN112785540A (en) 2021-05-11
CN112785540B CN112785540B (en) 2023-07-28

Family

ID=75760257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110136563.3A Active CN112785540B (en) 2020-06-12 2021-02-01 Diffusion weighted image generation system and method

Country Status (1)

Country Link
CN (1) CN112785540B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108885246A (en) * 2015-12-09 2018-11-23 皇家飞利浦有限公司 For generating the diffusion MRI method of the synthesis diffusion image at high b value
US20200096592A1 (en) * 2018-09-25 2020-03-26 Siemens Healthineers Ltd. Magnetic resonance diffusion tensor imaging method and device, and fiber tracking method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ding Jianping, Wang Xiaoying, Li Junmin, Jiang Xuexiang, Xiao Jiangxi: "Comparison of EPI and SSFSE sequences in MR diffusion imaging of the prostate and seminal vesicles", Journal of Clinical Radiology *
Wang Xinliang; Li Yuxin; Zhou Xiaolin: "Preliminary study on optimizing the b value in MR diffusion weighted imaging of the cervical spinal cord", Radiologic Practice *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160409A1 (en) * 2022-02-25 2023-08-31 International Business Machines Corporation Automatic determination of b-values from diffusion-weighted magnetic resonance images
CN116228606A (en) * 2023-05-09 2023-06-06 南京茂聚智能科技有限公司 Image optimization processing system based on big data
CN116228606B (en) * 2023-05-09 2023-07-28 南京茂聚智能科技有限公司 Image optimization processing system based on big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant