CN111445553B - Deep learning-based intra-voxel incoherent motion imaging acceleration method and device

Deep learning-based intra-voxel incoherent motion imaging acceleration method and device

Info

Publication number
CN111445553B
CN111445553B (application CN202010244432.2A)
Authority
CN
China
Prior art keywords
deep learning
voxel
values
placenta
data
Prior art date
Legal status
Active
Application number
CN202010244432.2A
Other languages
Chinese (zh)
Other versions
CN111445553A (en)
Inventor
吴丹
黄凡
颜国辉
邹煜
郑天舒
张祎
Current Assignee
Zhejiang Lamo Medical Imaging Technology Co., Ltd.
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010244432.2A
Publication of CN111445553A
Application granted
Publication of CN111445553B
Legal status: Active

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G06T11/005 - Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 - Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 - Features or image-related aspects of imaging apparatus classified in A61B5/00, adapted for image acquisition of a particular organ or body part
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/43 - Detecting, measuring or recording for evaluating the reproductive systems
    • A61B5/4306 - Detecting, measuring or recording for evaluating the female reproductive systems, e.g. gynaecological evaluations
    • A61B5/4343 - Pregnancy and labour monitoring, e.g. for labour onset detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G06T11/008 - Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T3/147 - Transformations for image registration using affine transformations

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Reproductive Health (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Pregnancy & Childbirth (AREA)
  • Gynecology & Obstetrics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a deep learning-based method and device for accelerating intra-voxel incoherent motion imaging of the placenta. The method comprises the following steps: first, the placenta data are registered by iterated rigid-body and affine registration to obtain registered placenta data. Second, on the registered placenta data, an expert delineates the region of interest so as to establish an intra-placental voxel database. Finally, the voxel data so obtained are used for training to obtain the corresponding characteristic parameters, thereby accelerating intra-voxel incoherent motion imaging of the placenta. The invention provides a deep-learning-based acceleration scheme for existing intra-placental intra-voxel incoherent motion imaging methods: image information of comparable quality can be obtained with less acquisition time, with higher accuracy and precision, and with better performance than other intra-placental intra-voxel incoherent motion imaging methods.

Description

Deep learning-based intra-voxel incoherent motion imaging acceleration method and device
Technical Field
The application relates to the field of magnetic resonance image processing and the field of artificial intelligence, in particular to a deep learning-based method and device for accelerating intra-voxel incoherent motion imaging.
Background
Diffusion imaging is a magnetic resonance imaging method that non-destructively measures the motion of water molecules in living tissue. Its image contrast is determined mainly by the speed and direction of water-molecule motion, unlike conventional MRI, in which contrast is formed by T1, T2 and proton-density weightings. Diffusion imaging can therefore provide microstructural information unavailable to conventional MRI and plays an important role in the detection of central nervous system diseases, for example in distinguishing benign from malignant tumors, evaluating therapeutic effect and predicting outcome.
Conventional diffusion imaging typically uses a single-shot spin-echo EPI sequence, with the strength of the diffusion weighting expressed by the b value. The simplest diffusion imaging protocol acquires one image without diffusion weighting (S0) and one image with diffusion weighting b (Sb), from which an apparent diffusion coefficient (ADC) is calculated. The ADC, in units of mm²/s, reflects the diffusion speed of water molecules, and its calculation formula is
ADC = ln(S0/Sb) / b
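By way of illustration only (not part of the claimed method), this per-voxel ADC computation can be sketched in Python; the function and array names are assumptions of the sketch:

```python
import numpy as np

def adc_map(S0, Sb, b):
    """Apparent diffusion coefficient per voxel, from a b = 0 image (S0)
    and a diffusion-weighted image (Sb) acquired at b (s/mm^2)."""
    eps = 1e-12  # guards against log(0) and division by zero in background voxels
    return np.log((S0 + eps) / (Sb + eps)) / b

# a voxel with S0 = 1000 and Sb = 300 at b = 800 s/mm^2 gives
# ADC = ln(1000/300) / 800, approximately 1.5e-3 mm^2/s
print(adc_map(1000.0, 300.0, 800.0))
```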
When two or more components coexist in a voxel and each component has a different ADC, the components can be analyzed by acquiring multi-b-value, multi-directional diffusion signals and designing a more complex microstructural model. In 1986, Le Bihan et al. proposed the intra-voxel incoherent motion (IVIM) model, which describes the pseudo-diffusion motion of blood microcirculation in tissue with a bi-exponential model and simultaneously captures the diffusion of water molecules in tissue (cells, axons, dendrites and the like) and the blood perfusion in the microcirculation (capillaries and small vessels). A common IVIM model can be expressed as
S(b) = S0·[(1 - f)·e^(-b·D) + f·e^(-b·D*)]
where f is the proportionality coefficient (perfusion fraction) corresponding to blood microcirculation, D* is the pseudo-diffusion coefficient corresponding to blood microcirculation, and D is the diffusion coefficient of water in tissue. The IVIM technique has been widely used to detect tissue perfusion, for example abnormal perfusion in organs such as the brain, kidney, liver and placenta. However, IVIM imaging requires diffusion signals at multiple b values (about 10), so the acquisition is long and susceptible to motion artifacts. Body organs suffer large artifacts from breathing, and placental imaging in particular is affected not only by respiration but also by fetal movement. Shortening the imaging time while preserving the quality of the IVIM reconstruction is therefore of great significance for accelerating placental imaging, reducing the discomfort of pregnant women and promoting the clinical application of the IVIM technique.
Deep learning learns the intrinsic regularities and representation levels of sample data, and the information obtained during learning greatly aids the interpretation of data such as text, images and sound. In recent years, deep learning methods have shown particular advantages in both magnetic resonance imaging and image processing, such as fast imaging techniques, image segmentation and computer-aided diagnosis. In addition, deep learning has seen preliminary application in diffusion MRI, for example the Q-space learning method, which reduces the number of diffusion directions and b-value acquisitions while realizing the same fiber-bundle reconstruction results.
The invention achieves better results by combining deep learning and IVIM models.
Disclosure of Invention
In order to overcome the defects of the prior art, namely long acquisition time and strong sensitivity to motion, the invention provides a deep learning-based method for accelerating intra-voxel incoherent motion imaging.
To this end, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a method for accelerating deep learning-based intra-voxel incoherent motion imaging, which comprises the following steps:
step one: establishing a human placenta IVIM database, the database comprising placental diffusion-weighted image data at 10 b values in the range 0-800 s/mm², the data being collected from a plurality of pregnant women at 28-36 weeks of gestation;
step two: respectively carrying out 6-10 times of iterative registration operation on diffusion weighted image data of each human placenta in the database under different b values so as to remove the influence of motion artifacts among frames of images under different b values;
step three: delineating the region of interest in each placental diffusion-weighted image registered in step two;
step four: based on the diffusion-weighted image data of each placenta at the 10 b values processed in step three, using the bi-exponential IVIM model as the fitting model and fitting the voxel data in the region of interest by segmented estimation followed by overall fitting, to obtain three characteristic parameters of each voxel in the region of interest of the placenta image, namely the blood perfusion fraction f, the blood water-molecule pseudo-diffusion coefficient D* and the tissue water-molecule diffusion coefficient D, and using these characteristic parameters as labels of the deep learning network training data;
step five: constructing a deep learning network, the input of the network being 3-5 b values together with the normalized diffusion-weighted signal S(b)/S0 of a voxel at each input b value, and the output being the f, D and D* of that voxel; the labeled diffusion-weighted image data obtained after the processing of step four are used as training data to train the deep learning network;
step six: taking 3-5 b values of the target human placenta and the diffusion-weighted image data at those b values as the input of the deep learning network, the deep learning network trained in step five outputting the three characteristic parameter estimates f, D and D* of each voxel in the image.
Based on the above-mentioned solution of the first aspect, the following preferred implementations can be further provided in each step.
Preferably, the establishing method of the human placenta IVIM database in the first step is as follows:
acquiring whole-uterus-coverage IVIM imaging of a plurality of pregnant women at 28-36 weeks of gestation with normal placental function, using a single-shot diffusion-weighted EPI sequence on a magnetic resonance system; the IVIM data of each pregnant woman comprise diffusion-weighted image data at b values of 10, 20, 50, 80, 100, 150, 200, 300, 500 and 800 s/mm².
Preferably, the image iterative registration method of step two is as follows:
averaging the diffusion-weighted images at all b values of the diffusion-weighted image data of each human placenta at 10 b values to obtain an average template; then registering the diffusion-weighted image at each b value to the average template through rigid-body transformation and affine transformation; averaging all the registered b-value diffusion-weighted images to obtain a new average template, and then performing the comparison and registration again; iterating 6-10 times yields the registration result for each diffusion-weighted image.
Preferably, the fitting method in the fourth step is as follows:
the form of the bi-exponential IVIM model for intra-voxel incoherent motion imaging is as follows:
S(b) = S0·[(1 - f)·e^(-b·D) + f·e^(-b·D*)]
the form of the tissue water molecule diffusion single exponential model without taking the IVIM effect into account is as follows:
S(b) = S0·e^(-b·D)
first, segmented estimation is performed based on the diffusion-weighted image data of each placenta at the 10 b values processed in step three: using the high-b-value data with b = 200-500 s/mm², the single-exponential model is used to estimate the D value and the extrapolated non-diffusion-weighted signal corresponding to the tissue water component, S0^tissue;
using the data with b = 10-150 s/mm², extrapolation is used to estimate the non-diffusion-weighted signal corresponding to tissue plus blood, S0^(tissue+blood);
the f value is estimated as
f = (S0^(tissue+blood) - S0^tissue) / S0^(tissue+blood)
and the D* value is estimated as D* = 10·D;
then, the estimated [f, D, D*] of each voxel in the region of interest is taken as the initial value, with [f/2, f×2], [D/2, D×2], [D*/3, D*×3] as the respective lower and upper limits of f, D and D*, and the bi-exponential model is fitted overall to the diffusion-weighted image data of each placenta at the 10 b values processed in step three; the fitting method is nonlinear least squares, yielding the fitted values of f, D and D* in each voxel of the region of interest, which serve as labels of the deep learning network training data.
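By way of illustration only, the segmented-then-overall fit described above can be sketched in Python for a single voxel as follows; the log-linear extrapolation via numpy.polyfit and all variable names are assumptions of this sketch rather than details taken from the patent:

```python
import numpy as np
from scipy.optimize import curve_fit

BVALS = np.array([10, 20, 50, 80, 100, 150, 200, 300, 500, 800], dtype=float)

def ivim(b, f, D, Ds):
    # bi-exponential IVIM model for the normalized signal S(b)/S0
    return (1.0 - f) * np.exp(-b * D) + f * np.exp(-b * Ds)

def fit_voxel(Sb, S0, b=BVALS):
    """Segmented estimation, then overall nonlinear least squares,
    for one voxel; assumes positive, reasonably clean signals."""
    s = Sb / S0
    hi = (b >= 200) & (b <= 500)            # high-b range: tissue water only
    slope, intercept = np.polyfit(b[hi], np.log(s[hi]), 1)
    D = -slope                              # tissue diffusion coefficient
    s0_tissue = np.exp(intercept)           # extrapolated tissue-only signal at b = 0
    low = (b >= 10) & (b <= 150)
    slope2, intercept2 = np.polyfit(b[low], np.log(s[low]), 1)
    s0_total = np.exp(intercept2)           # extrapolated tissue + blood signal
    f = (s0_total - s0_tissue) / s0_total
    Ds = 10.0 * D                           # D* initialized as 10 x D
    p0 = [f, D, Ds]
    bounds = ([f / 2, D / 2, Ds / 3], [f * 2, D * 2, Ds * 3])
    popt, _ = curve_fit(ivim, b, s, p0=p0, bounds=bounds)
    return popt                             # fitted [f, D, D*], used as labels

# noiseless self-check with assumed placental-scale parameter values
S = 1000.0 * ivim(BVALS, 0.3, 1.5e-3, 15e-3)
print(fit_voxel(S, 1000.0))  # should return values near [0.3, 1.5e-3, 1.5e-2]
```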
Preferably, during the overall fitting of the bi-exponential model, f, D and D* in the bi-exponential model are solved by establishing a nonlinear overdetermined system of equations.
Preferably, the deep learning network in the step five is constructed as follows:
firstly, designing a deep learning network framework consisting of an input layer, four hidden layers and an output layer; the four hidden layers are fully connected, and the number of neurons equals the number of input features; based on this deep learning framework, three independent deep learning networks are constructed, with each image voxel serving as one input sample; the input of each of the 3 networks consists of the 3-5 b-value vector and the normalized diffusion-weighted signal S(b)/S0 at each input b value, and the outputs of the 3 networks are the blood perfusion fraction f, the blood water-molecule pseudo-diffusion coefficient D* and the tissue water-molecule diffusion coefficient D respectively.
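By way of illustration only, one such per-parameter network can be sketched in PyTorch as follows; the exact input layout (here, five features per voxel) is an assumption of the sketch:

```python
import torch.nn as nn

def make_ivim_net(n_features: int) -> nn.Sequential:
    """Input layer, four fully connected hidden layers whose width equals
    the number of input features, and a scalar output."""
    layers = []
    for _ in range(4):
        layers += [nn.Linear(n_features, n_features), nn.ReLU()]
    layers.append(nn.Linear(n_features, 1))
    return nn.Sequential(*layers)

# three independent networks, one per characteristic parameter
nets = {name: make_ivim_net(n_features=5) for name in ("f", "D", "Dstar")}
```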
Preferably, the 3-5 b values input into the deep learning network in the sixth step should be consistent with the b values used for training the deep learning network in the fifth step.
In a second aspect, the present invention provides an acceleration apparatus for deep learning-based intra-voxel incoherent motion imaging, which includes a memory and a processor;
the memory for storing a computer program;
the processor, when executing the computer program, is configured to implement the acceleration method for deep learning based intra-voxel incoherent motion imaging according to any of the aspects of the first aspect.
In a third aspect, the present invention provides an acceleration apparatus for deep learning-based intra-voxel incoherent motion imaging, which includes a data acquisition device, a memory and a processor;
the data acquisition equipment is used for acquiring placenta diffusion weighted image data of the target placenta under 3-5 b values;
the memory is used for storing a computer program and an image acquired by the data acquisition equipment; the computer program comprises a deep learning network constructed and trained in the acceleration method based on deep learning intra-voxel incoherent motion imaging according to any aspect of the first aspect;
and the processor is used, when executing the computer program, for taking 3-5 b values of the target human placenta and the diffusion-weighted image data at those b values as the input of the deep learning network, the deep learning network in the memory outputting the three characteristic parameter estimates f, D and D* of each voxel in the image.
In a fourth aspect, the present invention provides a computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, implements the method for accelerating depth-learning based intra-voxel incoherent motion imaging according to any of the aspects of the first aspect.
Compared with the prior art, the invention has the following features: in the deep learning-based acceleration method for intra-voxel incoherent motion imaging, the parameter-fitting quality of a multi-b-value acquisition can be achieved from only a few b values, so the acquisition time in practice is short. Intra-voxel incoherent motion imaging has proven to be a useful tool for assessing placental microcirculatory flow and has potential value for the diagnosis of placental dysfunction. However, conventional fitting of the intra-voxel incoherent motion imaging model usually requires acquiring a long series of b values, leading to long acquisitions and strong motion artifacts when imaging body organs such as the placenta. Moreover, because pregnant women tolerate scanning poorly, long scans are difficult to integrate into the clinical routine scanning workflow. By combining the traditional intra-voxel incoherent motion imaging method with deep learning, the invention shortens the required acquisition time by reducing the number of b values without degrading the reconstruction of the characteristic parameters, which is of great significance for intra-placental intra-voxel incoherent motion imaging.
Drawings
Fig. 1 is a flow chart of an acceleration method for deep learning based voxel incoherent motion imaging.
Fig. 2 is a scatter plot comparing the feature parameters obtained by the deep learning-based method with the gold standard in all voxels.
Fig. 3 is a comparison of the f, D and D* maps obtained by the deep learning method at 3 b values and 5 b values with the gold-standard characteristic parameter maps.
Fig. 4 is a box plot of the f, D and D* values obtained at 3 b values and 5 b values by the deep learning method, compared with the gold standard.
Detailed Description
Fig. 1 is a flow chart of the present invention. The method of the invention is demonstrated below in combination with the following embodiments to show its specific technical effects, so as to enable those skilled in the art to better understand the essence of the invention.
In a preferred implementation mode of the invention, the acceleration method of the intra-placenta voxel incoherent motion imaging based on deep learning comprises the following steps:
the method comprises the following steps: establishing a human placenta IVIM database, the intra-placental voxel database comprising placental IVIM data of a plurality of pregnant women with normal placental function at 28-36 weeks of gestation. The placental IVIM data are collected on a 1.5 T GE magnetic resonance system; for each pregnant woman, whole-uterus-coverage IVIM imaging at 10 b values in the range 0-800 s/mm² is acquired with a single-shot diffusion-weighted EPI sequence. In the invention, each pregnant woman is acquired at 10 b values of 10, 20, 50, 80, 100, 150, 200, 300, 500 and 800 s/mm² for use as subsequent model-training data.
Step two: because the adopted intra-voxel incoherent imaging requires many b values to be acquired, the image acquisition time is long, and intrauterine imaging is easily affected by the respiratory motion of the pregnant woman's abdomen and by irregular fetal motion. To ensure the spatial consistency of all the different b-value images and to make the subsequent computation of each voxel's parameters more accurate, it is necessary to self-register each set of placental image data in the established human placenta IVIM database, so as to remove the displacements between the diffusion-weighted images at the respective b values caused by maternal respiration and fetal movement. The registration is performed 6-10 times in an iterative manner. The specific iterative registration method is as follows:
for the diffusion-weighted image data at the 10 b values of each pregnant woman, the diffusion-weighted images at all b values are averaged to obtain an average template; then the diffusion-weighted image at each b value is registered to the average template through rigid-body transformation and affine transformation, which constitutes the first registration pass. Next, all registered b-value diffusion-weighted images are averaged to obtain a new average template, and the diffusion-weighted image at each b value is again registered to it through rigid-body and affine transformation as the second pass. After 6-10 iterations, the final registration result of each diffusion-weighted image is obtained. The purpose of using averaging during registration is to ensure that the amount of deformation applied to each b-value image is minimal. This iterative registration removes the motion-artifact influence (including breathing, fetal motion and the like) that accumulates in the multi-b-value diffusion signal during the long acquisition, so that the tissue structures in the different b-value images are spatially consistent.
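By way of illustration only, the iterative scheme can be sketched in Python as follows; the register argument is a hypothetical stand-in for any rigid-body plus affine registration routine (the text does not name a specific toolbox):

```python
import numpy as np

def iterative_registration(volumes, register, n_iter=10):
    """Template-based iterative registration of the per-b-value DW volumes.
    volumes: list of 3-D arrays, one per b value.
    register: placeholder for a rigid-body + affine registration routine
    that resamples the moving volume onto the template (assumed, not
    specified by the patent)."""
    for _ in range(n_iter):                  # 6-10 iterations in the method
        template = np.mean(volumes, axis=0)  # average over all b values
        volumes = [register(vol, template) for vol in volumes]
    return volumes
```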
Step three: since the human placenta IVIM data also contain images of tissues other than the placenta, after the registration in step two is completed, regions of interest must be delineated in the registered diffusion-weighted images of the database, the region of interest being the placental region in each image. This enables the per-voxel analysis of the placenta in the subsequent steps.
Step four: since the human placenta IVIM data must carry ground-truth labels in advance when used for deep learning network training, the labels need to be obtained with a currently accepted, accurate calculation method. In the invention, based on the diffusion-weighted image data of each placenta at the 10 b values processed in step three, the bi-exponential IVIM model is used as the fitting model and the voxel data in the region of interest are fitted by segmented estimation followed by overall fitting, yielding the three characteristic parameters of each voxel in the region of interest of the placenta image: the blood perfusion fraction f, the blood water-molecule pseudo-diffusion coefficient D* and the tissue water-molecule diffusion coefficient D.
The bi-exponential model for intra-voxel incoherent motion imaging is:
S(b) = S0·[(1 - f)·e^(-b·D) + f·e^(-b·D*)]
where f is the blood perfusion fraction, D is the tissue water-molecule diffusion coefficient, and D* is the pseudo-diffusion coefficient of water molecules in blood.
At high b values, only the diffusion effect of tissue water molecules need be considered, using a single-exponential model of the form:
S(b) = S0·e^(-b·D)
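By way of illustration only, the following short Python sketch simulates the bi-exponential signal with assumed (not patent-specified) parameter values, showing why the high-b-value segment is well approximated by the single-exponential tissue model:

```python
import numpy as np

b = np.array([0, 10, 20, 50, 80, 100, 150, 200, 300, 500, 800], dtype=float)
f, D, Ds = 0.3, 1.5e-3, 15e-3   # illustrative placental-scale values (assumed)
S0 = 1000.0
S = S0 * ((1 - f) * np.exp(-b * D) + f * np.exp(-b * Ds))
# the fast pseudo-diffusion pool (D*) has essentially decayed by
# b of roughly 200 s/mm^2, so the high-b segment of S is dominated by
# the tissue term and can be fitted with the single-exponential model
print(np.round(S, 1))
```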
the fitting process is specifically divided into the following two sub-steps:
step 4.1: first, segment estimation is carried out, and high b value (b is 200-500 s/mm)2) Using the above-mentioned single exponential model to estimate the D value and the non-diffusion weighted signal corresponding to the water molecular component in the tissue
Figure RE-GDA0002817222320000071
Using low b values (b ═ 10-150 s/mm)2) By extrapolation methods to estimate the non-diffuse weighted signal corresponding to tissue plus blood
Figure RE-GDA0002817222320000072
The f value can be estimated as
Figure RE-GDA0002817222320000073
The value of D may be estimated as D × 10.
Step 4.2: then, with the [f, D, D*] of each voxel in the region of interest obtained in step 4.1 as initial values and [f/2, f×2], [D/2, D×2], [D*/3, D*×3] as the respective lower and upper limits of f, D and D*, the f, D and D* in each voxel of the region of interest are obtained by nonlinear least-squares fitting based on the bi-exponential model and taken as the overall fitted values. During the overall fitting, f, D and D* in the bi-exponential model can be solved by establishing a nonlinear overdetermined system of equations. The data obtained after fitting can be used as the gold standard, so the fitted values can serve as labels for the subsequent deep learning network training data. In this step the fitted data are the diffusion-weighted image data at the 10 b values; since the fitting is performed voxel by voxel, the independent and dependent variables of each fit are the data within the corresponding single voxel.
Step five: after the labels of the training data are obtained, a deep learning method can be adopted and trained on the fitted data to produce the characteristic parameters. First, a multilayer-perceptron deep learning framework is designed, consisting of an input layer, four hidden layers and an output layer. The four hidden layers are fully connected and the number of neurons equals the number of input features. In this deep learning framework, each image voxel serves as one input sample. Based on this framework, three independent deep learning networks are constructed; the input of each of the 3 networks is the 3-5 b-value vector together with the normalized diffusion-weighted signals S(b)/S0 at all input b values, and the outputs of the 3 networks are the blood perfusion fraction f, the blood water-molecule pseudo-diffusion coefficient D* and the tissue water-molecule diffusion coefficient D.
The sub-step implementation of this step is described in detail below:
Step 5.1: the labeled diffusion-weighted image data obtained after the processing of step four are divided proportionally into a training set and a test set, the training set being used to train the deep learning network and the test set to evaluate its performance. Because D is on the order of 10^-3, D* on the order of 10^-2 and f on the order of 10^-1, D is scaled up by a factor of 100 and D* by a factor of 10. The learning rate is set to 0.001.
Step 5.2: the model weights are randomly initialized. Model training is the process of fitting the input data to the output data, essentially a search for the underlying distribution of the data.
Step 5.3: for the training data and labels, 256 samples at a time are taken in the spatial order of the voxel arrangement and fed into the model. The model uses 4 fully connected layers with the structure [5,5] + linear rectification (ReLU) activation, followed by [5,3], where [m,n] denotes an m×n weight matrix and the linear rectification function is the ramp function:
f(x)=max(0,x)
During training, the root-mean-square error between the model predictions and the actual label values is computed in a loop as the loss, the gradient is computed by the back-propagation algorithm, and the model parameters are updated by stochastic gradient descent until the prediction loss settles at a small value.
Step 5.4: step 5.3 is repeated for all voxels in the training set.
Step 5.5: step 5.4 is repeated 5000 times; if the loss is found to be stable within a relatively small range of values, indicating network convergence, the network is saved for later use in prediction.
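By way of illustration only, steps 5.3-5.5 can be sketched in PyTorch as follows; X and y are assumed to be pre-assembled tensors of per-voxel input features and scaled labels, and the convergence check on the loss is omitted for brevity:

```python
import torch
import torch.nn as nn

def train_net(net, X, y, lr=1e-3, batch=256, epochs=5000):
    """Mini-batch training: batches of 256 voxels taken in spatial order,
    RMSE loss, back-propagation, stochastic gradient descent (lr = 0.001).
    X: [n_voxels, n_features]; y: [n_voxels, 1] labels pre-scaled
    (D x 100, D* x 10, per step 5.1)."""
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for i in range(0, X.shape[0], batch):
            loss = torch.sqrt(mse(net(X[i:i + batch]), y[i:i + batch]))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return net
```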
Step 5.6: the saved network is loaded for testing; test-set data are input to obtain predictions, which are compared against the test-set labels to judge the performance of the network. When the performance meets the requirements, actual [f, D, D*] prediction can be carried out; the practical application is described in step six.
Step six: the 3-5 b values of the target human placenta and the diffusion-weighted image data at those b values are taken as the input of the deep learning network, and the network trained in step five outputs the three characteristic parameter estimates f, D and D* of each voxel in the image. It should be noted that the 3-5 b values input into the deep learning network should be consistent with the b values used to train the network in step five.
In this way, the three characteristic parameters f, D and D* of each voxel in the placenta image can be obtained from IVIM data at only 3-5 b values, greatly reducing the number of acquisitions needed for multi-b-value IVIM data of a pregnant woman and shortening the acquisition time without compromising the reconstruction of the characteristic parameters.
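By way of illustration only, and continuing the make_ivim_net sketch given under step five, inference on new data at the matching b values could look like the following; the random signals array merely stands in for real normalized measurements:

```python
import numpy as np
import torch

# toy normalized signals S(b)/S0 for 4 voxels at 5 b values (illustrative)
signals = torch.from_numpy(np.random.rand(4, 5).astype(np.float32))

with torch.no_grad():
    f_map = nets["f"](signals)              # perfusion fraction f
    D_map = nets["D"](signals) / 100.0      # undo the x100 label scaling
    Ds_map = nets["Dstar"](signals) / 10.0  # undo the x10 label scaling
```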
The following example demonstrates the technical effects of the above method based on steps one to five, so that those skilled in the art can better understand the essence of the invention.
Examples
The deep learning-based acceleration method for intra-voxel incoherent motion imaging was tested on data from 9 subjects at 28-36 weeks of gestation. Magnetic resonance scanning was performed on a GE SIGNA HDxt 1.5 T scanner, acquiring placental images in the maternal sagittal plane with a diffusion-weighted echo-planar imaging sequence: echo time (TE)/repetition time (TR) = 76/3000 ms, field of view (FOV) = 320 × 320 mm, in-plane resolution 1.25 × 1.25 mm, slice thickness 4 mm, 15 slices in total; diffusion-weighted gradients were applied in three directions with b values of 0, 10, 20, 50, 80, 100, 150, 200, 300, 500 and 800 s/mm². In this embodiment, the iterative registration operation of step two was repeated 10 times.
Meanwhile, in order to show the technical effects of the method by comparison, this implementation compares the results obtained by the deep learning method on the test set with the gold standard obtained by fitting. The comparison results are shown in Figs. 2-4:
the scatter plot of fig. 2 shows the correlation between IVIM feature parameters from deep learning of 3 b-values (first row) and 5 values (second row), respectively, and the gold standard in all voxels of the test set. The figure shows a good positive correlation between the two, and the deep learning of 5 values (second row) yields better feature parameter accuracy than the 3 b values.
Figure 3 shows representative f, D and D* characteristic parameter maps obtained by deep learning from 3 b values (first row) and 5 b values (second row) for one example of placental data. The maps are substantially similar to the gold-standard characteristic parameter maps (third row).
Fig. 4 shows box plots of the IVIM characteristic parameters obtained by deep learning from 3 b values and 5 b values over all voxels of the test set. The mean characteristic-parameter values learned from either 3 or 5 b values are close to the gold standard.
In addition, the experiment also computed the IVIM characteristic parameters fitted by the traditional segmented fitting method from 3 b values. Table 1 lists the comparison of IVIM characteristic parameters obtained using 3 or 5 b values by the deep learning and segmented fitting methods. As Table 1 shows, the method proposed by the invention has the lowest error.
Compared with the traditional method, the proposed IVIM acceleration method performs better with the same number of gradient directions, achieving a short acquisition time with quality similar to that of the traditional method using many more b values.
TABLE 1 comparison of the three algorithms
(Table 1 is reproduced only as an image in the original publication; its numerical values are not available in this text.)
Additionally, in other embodiments, an acceleration apparatus for deep learning based intra-placental voxel incoherent motion imaging may also be provided, comprising a memory and a processor;
the memory for storing a computer program;
the processor, when executing the computer program, being configured to implement the aforementioned acceleration method for deep learning-based intra-placental voxel incoherent motion imaging.
It should be noted that the memory may comprise Random Access Memory (RAM) or Non-Volatile Memory (NVM), for example at least one disk memory. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. Of course, the device should also have the components necessary for the program to run, such as a power supply and a communication bus.
Additionally, in other embodiments, an acceleration apparatus for deep learning-based intra-placental voxel incoherent motion imaging may be provided that comprises a data acquisition device, a memory and a processor;
the data acquisition equipment is used for acquiring placenta diffusion weighted image data of the target placenta under 3-5 b values, and can be specifically realized by adopting a magnetic resonance imaging system.
The memory is used for storing a computer program and the images acquired by the data acquisition device; the computer program comprises the deep learning network constructed and trained in step five above;
and the processor is used, when executing the computer program, for taking the 3-5 b values of the target human placenta in the memory and the diffusion-weighted image data at those b values as the input of the deep learning network, the deep learning network in the memory outputting the three characteristic parameter estimates f, D and D* of each voxel in the image.
In addition, in this set of accelerating devices, the memory and the processor may be integrated into the data-processing equipment of the magnetic resonance imaging system: after the magnetic resonance imaging system acquires the corresponding data of the subject, the data are stored in the memory, the processor then calls the internal program to process them, and the result is output directly.
In addition, in other embodiments, a computer readable storage medium may also be provided, having stored thereon a computer program which, when being executed by a processor, implements the aforementioned acceleration method for deep learning-based intra-placental voxel incoherent motion imaging.
It should be noted that the above-mentioned embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, the technical scheme obtained by adopting the mode of equivalent replacement or equivalent transformation is within the protection scope of the invention.

Claims (9)

1. A method for accelerating deep learning-based intra-voxel incoherent motion imaging is characterized by comprising the following steps:
the method comprises the following steps: establishing an intra-voxel incoherent motion (IVIM) imaging database of the human placenta, the database comprising placental diffusion-weighted image data at 10 b values in the range 0-800 s/mm², the data being collected from pregnant women at 28-36 weeks of gestation;
step two: respectively carrying out 6-10 times of iterative registration operation on diffusion weighted image data of each human placenta in the database under different b values so as to remove the influence of motion artifacts among frames of images under different b values;
step three: delineating the region of interest in the registered diffusion-weighted image of each placenta from step two;
step four: based on the diffusion-weighted image data of each placenta at the 10 b values processed in step three, using the bi-exponential IVIM model as the fitting model and fitting the voxel data in the region of interest by segmented estimation followed by overall fitting, to obtain three characteristic parameters of each voxel in the region of interest of the placenta image, namely the blood perfusion fraction f, the pseudo-diffusion coefficient D* of blood water molecules and the tissue water-molecule diffusion coefficient D, and taking these characteristic parameters as labels of the deep learning network training data;
step five: constructing a deep learning network, the input of the network being 3-5 b values together with the normalized diffusion-weighted signal S(b)/S0 of a voxel at each input b value, and the output being the f, D and D* of that voxel; the labeled diffusion-weighted image data obtained after the processing of step four are used as training data to train the deep learning network;
step six: taking 3-5 b values of the target human placenta and the diffusion-weighted image data at those b values as the input of the deep learning network, the deep learning network trained in step five outputting the three characteristic parameter estimates f, D and D* of each voxel in the image;
the fitting method in the fourth step is as follows:
the form of the bi-exponential IVIM model for intra-voxel incoherent motion imaging is as follows:
S(b) = S0·[(1 - f)·e^(-b·D) + f·e^(-b·D*)]
the form of the tissue water-molecule diffusion single-exponential model, which does not take the IVIM effect into account, is as follows:
S(b) = S0·e^(-b·D)
firstly, based on the diffusion-weighted image data of each placenta at the 10 b values processed in step three, segmented estimation is performed: using the high-b-value data with b = 200-500 s/mm², the single-exponential model is adopted to estimate the D value and the non-diffusion-weighted signal corresponding to the tissue water component, S0^tissue;
using the data with b = 10-150 s/mm², extrapolation is used to estimate the non-diffusion-weighted signal corresponding to tissue plus blood, S0^(tissue+blood);
the f value is estimated as
f = (S0^(tissue+blood) - S0^tissue) / S0^(tissue+blood)
and the D* value is estimated as D* = 10·D;
then, the estimated [f, D, D*] of each voxel in the region of interest is taken as the initial value, with [f/2, f×2], [D/2, D×2], [D*/3, D*×3] as the respective lower and upper limits of f, D and D*, and the bi-exponential model is fitted overall to the diffusion-weighted image data of each placenta at the 10 b values processed in step three, thereby obtaining the fitted values of f, D and D* in each voxel of the region of interest, which are used as labels of the deep learning network training data;
said S0 being an image acquired without diffusion weighting, said Sb being an image acquired with diffusion weighting b, and b being the diffusion weighting.
2. The method for accelerating deep learning based intra-voxel incoherent motion imaging according to claim 1, wherein the human placenta IVIM database of the first step is established as follows:
acquiring whole-uterus-coverage IVIM imaging of a plurality of pregnant women at 28-36 weeks of gestation with normal placental function, using a single-shot diffusion-weighted EPI sequence on a magnetic resonance system; the IVIM data of each pregnant woman comprise diffusion-weighted image data at b values of 10, 20, 50, 80, 100, 150, 200, 300, 500 and 800 s/mm².
3. The method for accelerating depth-learning-based intra-voxel incoherent motion imaging according to claim 1, wherein the image iterative registration method of the second step is as follows:
averaging the diffusion-weighted images at all b values of the diffusion-weighted image data of each human placenta at 10 b values to obtain an average template; then registering the diffusion-weighted image at each b value to the average template through rigid-body transformation and affine transformation; averaging all the registered b-value diffusion-weighted images to obtain a new average template, and then performing the comparison and registration again; iterating 6-10 times yields the registration result for each diffusion-weighted image.
4. The method for accelerating deep learning-based intra-voxel incoherent motion imaging according to claim 1, wherein a nonlinear least-squares method is adopted when the bi-exponential model is fitted overall.
5. The method for accelerating deep learning-based intra-voxel incoherent motion imaging according to claim 1, wherein the deep learning network of the fifth step is constructed as follows:
firstly, designing a deep learning network framework consisting of an input layer, four hidden layers and an output layer; the four hidden layers are fully connected, and the number of neurons equals the number of input features; based on this deep learning framework, three independent deep learning networks are constructed, with each image voxel serving as one input sample; the input of each of the 3 networks consists of the 3-5 b-value vector and the normalized diffusion-weighted signal S(b)/S0 at each input b value, and the outputs of the 3 networks are the blood perfusion fraction f, the blood water-molecule pseudo-diffusion coefficient D* and the tissue water-molecule diffusion coefficient D respectively.
6. The method for accelerating deep learning-based intra-voxel incoherent motion imaging according to claim 1, wherein the 3-5 b values input into the deep learning network in step six are consistent with the b values used for training the deep learning network in step five.
7. An acceleration apparatus for deep learning based intra-voxel incoherent motion imaging, comprising a memory and a processor;
the memory for storing a computer program;
the processor, when executing the computer program, is configured to implement the method for accelerating depth-learning based intra-voxel incoherent motion imaging according to any one of claims 1 to 6.
8. An accelerating device for deep learning-based intra-voxel incoherent motion imaging is characterized by comprising a data acquisition device, a memory and a processor;
the data acquisition equipment is used for acquiring placenta diffusion weighted image data of the target placenta under 3-5 b values;
the memory is used for storing a computer program and an image acquired by the data acquisition equipment; the computer program comprises a deep learning network which is constructed and trained in the acceleration method based on the deep learning intra-voxel incoherent motion imaging according to any one of claims 1 to 6;
and the processor is used, when executing the computer program, for taking 3-5 b values of the target human placenta and the diffusion-weighted image data at those b values as the input of the deep learning network, the deep learning network in the memory outputting the three characteristic parameter estimates f, D and D* of each voxel in the image.
9. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out a method for accelerating depth-learning based intra-voxel incoherent motion imaging according to any one of claims 1 to 6.
CN202010244432.2A 2020-03-31 2020-03-31 Deep learning-based intra-voxel incoherent motion imaging acceleration method and device Active CN111445553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244432.2A CN111445553B (en) Deep learning-based intra-voxel incoherent motion imaging acceleration method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010244432.2A CN111445553B (en) Deep learning-based intra-voxel incoherent motion imaging acceleration method and device

Publications (2)

Publication Number Publication Date
CN111445553A CN111445553A (en) 2020-07-24
CN111445553B (en) 2021-02-09

Family

ID=71652596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244432.2A Active CN111445553B (en) Deep learning-based intra-voxel incoherent motion imaging acceleration method and device

Country Status (1)

Country Link
CN (1) CN111445553B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838105B (en) * 2021-09-22 2024-02-13 浙江大学 Diffusion microcirculation model driving parameter estimation method, device and medium based on deep learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104323777B (en) * 2014-10-30 2016-06-29 西安交通大学医学院第一附属医院 A kind of removing method of diffusion magnetic resonance imaging moving artifact
US10302723B2 (en) * 2014-11-14 2019-05-28 Foundation For Research And Technology —Hellas (Forth) Apparatuses, methods and systems for estimating water diffusivity and microcirculation of blood using DW-MRI data
CN107240125B (en) * 2016-03-28 2020-02-07 上海联影医疗科技有限公司 Diffusion weighted imaging method
CN110276762A (en) * 2018-03-15 2019-09-24 北京大学 A kind of full-automatic bearing calibration of respiratory movement of the diffusion-weighted Abdominal MRI imaging of more b values
CN109730677B (en) * 2019-01-09 2023-03-21 王毅翔 Signal processing method and device for intra-voxel incoherent motion imaging and storage medium
CN110889897B (en) * 2019-11-21 2021-04-06 厦门大学 Method and system for reconstructing incoherent motion magnetic resonance imaging parameters in voxel

Also Published As

Publication number Publication date
CN111445553A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
Alansary et al. Fast fully automatic segmentation of the human placenta from motion corrupted MRI
Siauve et al. Assessment of human placental perfusion by intravoxel incoherent motion MR imaging
US9311702B2 (en) System and method for estimating a quantity of interest of a dynamic artery/tissue/vein system
KR20160058812A (en) Image analysis techniques for diagnosing diseases
CN110969614B (en) Brain age prediction method and system based on three-dimensional convolutional neural network
CN111415361B (en) Method and device for estimating brain age of fetus and detecting abnormality based on deep learning
Kim et al. Automatic myocardial segmentation in dynamic contrast enhanced perfusion MRI using Monte Carlo dropout in an encoder-decoder convolutional neural network
CN111753833A (en) Parkinson auxiliary identification method for building brain network modeling based on fMRI and DTI
CN110619635A (en) Hepatocellular carcinoma magnetic resonance image segmentation system and method based on deep learning
Hesse et al. Subcortical segmentation of the fetal brain in 3D ultrasound using deep learning
CN111445553B (en) Deep learning-based intra-voxel incoherent motion imaging acceleration method and device
KR20140028534A (en) System and method for assessing brain dyfunction using functional magnetic resonance imaging
CN114533121A (en) Brain perfusion state prediction device, method and equipment and model training device
Wang et al. Automatic evaluation of endometrial receptivity in three-dimensional transvaginal ultrasound images based on 3D U-Net segmentation
Cromb et al. Assessing within‐subject rates of change of placental MRI diffusion metrics in normal pregnancy
CN112992353A (en) Method and device for accurately predicting due date, computer equipment and storage medium
CN109118526B (en) Senile dementia image analysis system and analysis method based on virtual reality
Kulseng et al. Automatic placental and fetal volume estimation by a convolutional neural network
WO2021196866A1 (en) Method and apparatus for measuring placental blood flow by using flow-compensated and non-flow compensated diffusion magnetic resonance
CN113838105B (en) Diffusion microcirculation model driving parameter estimation method, device and medium based on deep learning
CN109741439A (en) A kind of three-dimensional rebuilding method of two dimension MRI fetus image
CN108720870A (en) A kind of fatty liver detecting system based on ultrasonic attenuation coefficient
Zhang et al. Graph-based whole body segmentation in fetal MR images
CN112508872A (en) Intracranial blood vessel image preprocessing method and electronic equipment
Easley et al. Inter-observer variability of vaginal wall segmentation from MRI: A statistical shape analysis approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230801

Address after: 409, 411, National University Science Park Headquarters Complex Building, No. 669 High speed Railway, the Taihu Lake Street, Changxing County, Huzhou City, Zhejiang Province, 313100

Patentee after: Zhejiang Lamo Medical Imaging Technology Co.,Ltd.

Address before: 310058 Yuhang Tang Road, Xihu District, Hangzhou, Zhejiang 866

Patentee before: ZHEJIANG University