CN118115608A - Quick diffusion tensor imaging method and device - Google Patents

Quick diffusion tensor imaging method and device

Info

Publication number
CN118115608A
CN118115608A (application CN202211485863.3A)
Authority
CN
China
Prior art keywords
diffusion
image
model
weighted image
parameter map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211485863.3A
Other languages
Chinese (zh)
Inventor
朱燕杰
徐溪
梁栋
郑海荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202211485863.3A
Priority to PCT/CN2023/133052 (published as WO2024109757A1)
Publication of CN118115608A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 33/00 Arrangements or instruments for measuring magnetic variables
    • G01R 33/20 Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R 33/44 Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R 33/48 NMR imaging systems
    • G01R 33/54 Signal processing systems, e.g. using pulse sequences; Generation or control of pulse sequences; Operator console
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a rapid diffusion tensor imaging method and device. The method comprises the following steps: acquiring a reference image of an acquisition object without diffusion weighting and a first diffusion-weighted image in a first number of diffusion gradient encoding directions; generating, based on the reference image and the first diffusion-weighted image, a second diffusion-weighted image in a second number of other diffusion gradient encoding directions besides the first number of diffusion gradient encoding directions; determining a preliminary quantitative parameter map of the acquisition object based on the first diffusion-weighted image and the second diffusion-weighted image; and determining a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion-weighted image. With this technical scheme, images in the other directions can be generated and tensor fitting can be performed from a small number of diffusion-weighted images, the number of repeated acquisitions is reduced, image acquisition time is saved, and a high-quality quantitative parameter map is output.

Description

Quick diffusion tensor imaging method and device
Technical Field
The invention relates to the technical field of diffusion tensor imaging, in particular to a rapid diffusion tensor imaging method and device.
Background
The cardiac diffusion tensor imaging technique forms images based on the anisotropy of water molecule diffusion, and diffusion tensor analysis is conventionally performed using traditional fitting algorithms.
In the related art, diffusion tensor analysis generally requires acquiring diffusion-weighted images in 10 to 16 diffusion gradient encoding directions to obtain an overdetermined system of equations to solve, and, to improve the image signal-to-noise ratio, 8 to 12 repeated acquisitions are needed. The scan time of cardiac diffusion tensor imaging is therefore long.
Disclosure of Invention
The invention provides a rapid diffusion tensor imaging method and device, which are used for solving the technical problem of long scanning time of diffusion tensor imaging.
According to an aspect of the present invention, there is provided a fast diffusion tensor imaging method, the method comprising:
Acquiring a reference image of an acquisition object without diffusion weighting and a first diffusion-weighted image in a first number of diffusion gradient encoding directions;
Generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions based on the reference image and the first diffusion weighted image;
Determining a preliminary quantitative parameter map of the acquisition object based on the first diffusion-weighted image and the second diffusion-weighted image;
And determining a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion weighted image.
According to another aspect of the present invention, there is provided a fast diffusion tensor imaging apparatus, the apparatus comprising:
The diffusion image acquisition module is used for acquiring a reference image of an acquisition object without diffusion weighting and a first diffusion-weighted image in a first number of diffusion gradient encoding directions;
A diffusion image generation module for generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions than the first number of diffusion gradient encoding directions based on the reference image and the first diffusion weighted image;
The preliminary quantitative parameter map determining module is used for determining a preliminary quantitative parameter map of the acquisition object based on the first diffusion weighted image and the second diffusion weighted image;
And the target quantitative parameter map determining module is used for determining a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion weighted image.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the fast diffusion tensor imaging method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement a fast diffusion tensor imaging method according to any one of the embodiments of the present invention when executed.
According to the technical scheme, by acquiring the reference image of the acquisition object without diffusion weighting and the first diffusion-weighted images in the first number of diffusion gradient encoding directions, only a small number of diffusion-weighted images need to be acquired, which saves image acquisition time. A second diffusion-weighted image in a second number of other diffusion gradient encoding directions besides the first number of diffusion gradient encoding directions is generated based on the reference image and the first diffusion-weighted image; a preliminary quantitative parameter map of the acquisition object is determined based on the first and second diffusion-weighted images, so that images in the other directions are generated and the quantitative parameter map is solved by exploiting the correlation among weighted images in different directions; and a target quantitative parameter map of the acquisition object is determined based on the preliminary quantitative parameter map and the first diffusion-weighted image to improve the accuracy of the quantitative parameter map. This solves the technical problem that the acquisition time of diffusion tensor imaging is too long: images in other directions can be generated and tensor fitting can be performed from a small number of diffusion-weighted images, which reduces the number of repeated acquisitions, saves image acquisition time, and outputs a high-quality target quantitative parameter map.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a fast diffusion tensor imaging method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a fast diffusion tensor imaging method according to an embodiment of the present invention;
FIG. 3 is an example schematic diagram of a model structure of a first model employed to perform a fast diffusion tensor imaging method according to an embodiment of the present invention;
FIG. 4 is an example schematic diagram of a model structure of an overall model employed to perform a fast diffusion tensor imaging method according to an embodiment of the present invention;
FIG. 5 is a graph showing the comparative effect of quantitative parameter graphs obtained by using a conventional method and a fast diffusion tensor imaging method according to an embodiment of the present invention;
FIG. 6 is a graph of the contrast effect of a diffusion-weighted image generated in a first model and an actually acquired diffusion-weighted image in a fast diffusion tensor imaging method employing an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a fast diffusion tensor imaging device according to a third embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device implementing the fast diffusion tensor imaging method of an embodiment of the invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a description will first be given of the computational model underlying cardiac diffusion tensor imaging. The diffusion tensor imaging fitting model is also called a diffusion tensor estimation model. In the embodiment of the invention, the diffusion tensor imaging fitting model can be specifically expressed as the following formula:

S_k = S_0 \exp\!\left(-b\, g_k^{T} D\, g_k\right)    (1)

wherein S_0 is the reference image without diffusion weighting, b is the diffusion weighting factor, S_k is the diffusion-weighted image acquired by applying the k-th diffusion gradient encoding direction g_k = (g_x, g_y, g_z)^{T}, and the diffusion tensor D is a positive definite symmetric matrix containing 6 unknowns.
Specifically, the solution of the diffusion tensor can be expressed as:

-\ln\frac{S_k}{S_0} = b\, g_k^{T} D\, g_k = B_k\, d    (2)

wherein D_{mn}, m, n = x, y, z, are the tensor elements of the tensor matrix D, and d = (D_{xx}, D_{yy}, D_{zz}, D_{xy}, D_{xz}, D_{yz})^{T} collects the 6 unknown elements. g_k^{T} is the transposed matrix of the diffusion gradient encoding direction, k = 1, ..., K, where K is the number of diffusion gradient encoding directions, and the matrix B whose k-th row is B_k = b\,(g_x^2, g_y^2, g_z^2, 2g_xg_y, 2g_xg_z, 2g_yg_z) is referred to as the B matrix. Thus, the diffusion tensor calculation model can be expressed specifically as the following formula:

d = \left(B^{T}B\right)^{-1} B^{T} y, \qquad y_k = -\ln\frac{S_k}{S_0}    (3)
Eigenvalue decomposition is performed on the diffusion tensor D to obtain the eigenvalues (\lambda_1, \lambda_2, \lambda_3) and eigenvectors (V_1, V_2, V_3), from which the mean diffusivity (MD) and fractional anisotropy (FA) are calculated:

MD = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}, \qquad FA = \sqrt{\frac{3}{2}}\,\sqrt{\frac{(\lambda_1 - MD)^2 + (\lambda_2 - MD)^2 + (\lambda_3 - MD)^2}{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
The helix angle (HA) is defined as the angle between the projection of the first eigenvector V_1, corresponding to the maximum eigenvalue, onto the tangent plane of the ventricular wall and the short-axis plane.
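For illustration, a minimal NumPy sketch of the tensor fit in formulas (2)-(3) and of the MD/FA computation is given below; the function and variable names, and the convention that the diffusion-weighted signals S are stacked along the last axis, are assumptions made for this sketch rather than part of the patent.

```python
import numpy as np

# Illustrative sketch; names and array layouts are assumptions.

def build_b_matrix(bval, bvecs):
    """Rows B_k = b*(gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz) for each encoding direction g_k."""
    gx, gy, gz = bvecs[:, 0], bvecs[:, 1], bvecs[:, 2]
    return bval * np.stack([gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz], axis=1)

def fit_tensor(S0, S, bval, bvecs):
    """Least-squares fit of the 6 tensor elements per voxel (formula (3)); S has shape (..., K)."""
    B = build_b_matrix(bval, bvecs)                          # (K, 6)
    y = -np.log(np.clip(S / S0[..., None], 1e-6, None))     # right-hand side of formula (2)
    d = np.linalg.lstsq(B, y.reshape(-1, B.shape[0]).T, rcond=None)[0].T   # (voxels, 6)
    Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = d.T
    D = np.stack([np.stack([Dxx, Dxy, Dxz], -1),
                  np.stack([Dxy, Dyy, Dyz], -1),
                  np.stack([Dxz, Dyz, Dzz], -1)], -2)
    return D.reshape(S0.shape + (3, 3))

def md_fa(D):
    """Mean diffusivity and fractional anisotropy from the eigenvalues of the fitted tensor."""
    lam = np.linalg.eigvalsh(D)                              # (..., 3), ascending eigenvalues
    md = lam.mean(axis=-1)
    num = ((lam - md[..., None]) ** 2).sum(axis=-1)
    den = (lam ** 2).sum(axis=-1).clip(min=1e-12)
    return md, np.sqrt(1.5 * num / den)
```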
Fig. 1 is a flowchart of a fast diffusion tensor imaging method according to an embodiment of the present invention. The present embodiment is applicable to the case of reducing the number of q-space acquisition directions in diffusion tensor imaging, and the method may be performed by a fast diffusion tensor imaging apparatus, which may be implemented in the form of hardware and/or software, and which may be configured in a terminal or a server. As shown in fig. 1, the method may specifically include:
S110, acquiring a reference image of an acquisition object without diffusion weighting and a first diffusion-weighted image in a first number of diffusion gradient encoding directions.
Wherein an acquisition object may be understood as an object for acquiring a diffusion weighted image. Illustratively, the acquisition object may include a heart or brain, or the like. Diffusion weighted imaging is an imaging method based on the flow effect of magnetic resonance imaging, and can reflect the diffusion speed of water molecules.
The first number may be understood as the number of diffusion gradient encoding directions acquired. It will be appreciated that the first number may be set according to actual requirements and is not specifically limited herein. From the standpoint of acquisition time, the smaller the first number the better; from the standpoint of the accuracy of the quantitative parameter map, the larger the first number the better. In embodiments of the present invention, the first number may typically be six or more, since the symmetric diffusion tensor contains six unknowns. In other words, the number of acquired diffusion gradient encoding directions may be six or more.
S120, generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions than the first number of diffusion gradient encoding directions based on the reference image and the first diffusion weighted image.
In embodiments of the present invention, a second diffusion weighted image of the other diffusion gradient encoding direction may be generated based on the first diffusion weighted image of the first number of diffusion gradient encoding directions and the reference image. By adopting the technical scheme, fewer diffusion weighted images can be acquired, and the image acquisition time is shortened.
Specifically, a diffusion tensor may be calculated based on the reference image and the first diffusion weighted image, and then a diffusion weighted image of a second number of other diffusion gradient encoding directions except the first number of diffusion gradient encoding directions is obtained by fitting according to a diffusion tensor, the reference image and a diffusion tensor imaging fitting model. Further, the diffusion weighted image in the other diffusion gradient encoding direction obtained by fitting may be used as the second diffusion weighted image, or the diffusion weighted image in the other diffusion gradient encoding direction obtained by fitting may be subjected to correction processing to obtain the second diffusion weighted image.
The method for correcting the diffusion weighted image of the other diffusion gradient encoding direction obtained by fitting may be various, for example, correction of the diffusion weighted image of the other diffusion gradient encoding direction obtained by fitting based on a preset image processing algorithm, correction of the diffusion weighted image of the other diffusion gradient encoding direction obtained by fitting based on a neural network trained in advance, and the like.
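Continuing the NumPy sketch above, the synthesis of diffusion-weighted images in additional encoding directions from the fitted tensor via formula (1) could look as follows; the helper name and array layout are again assumptions. In the scheme described here such synthesized images are not used directly but are further corrected, for example by a pre-trained neural network, as stated above.

```python
import numpy as np

# Illustrative sketch; reuses fit_tensor's 3x3 tensor output D from the previous sketch.

def synthesize_dwi(S0, D, bval, new_bvecs):
    """Generate diffusion-weighted images in additional encoding directions from a fitted tensor (formula (1))."""
    out = []
    for g in new_bvecs:                                  # each g is a unit vector (gx, gy, gz)
        quad = np.einsum('...ij,i,j->...', D, g, g)      # g^T D g evaluated per voxel
        out.append(S0 * np.exp(-bval * quad))
    return np.stack(out, axis=-1)                        # shape (..., number of new directions)
```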
S130, determining a preliminary quantitative parameter map of the acquisition object based on the first diffusion weighted image and the second diffusion weighted image.
In the embodiment of the invention, the first diffusion weighted image and the second diffusion weighted image can be fitted based on a traditional algorithm to obtain a preliminary quantitative parameter map of the acquisition object.
For example, conventional algorithms may include least squares or weighted least squares. The diffusion tensor can be obtained by minimizing a sum-of-squares loss function, and all quantitative indices are then determined from the diffusion tensor, yielding a preliminary quantitative parameter map of the acquisition object.
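A weighted least-squares variant of the tensor fit, reusing build_b_matrix from the sketch above, might look like the following; the choice of squared-signal weights is a common convention and an assumption here, as the patent does not prescribe specific weights:

```python
import numpy as np

# Illustrative sketch; build_b_matrix is defined in the earlier sketch.

def fit_tensor_wls(S0, S, bval, bvecs):
    """Weighted least-squares tensor fit; weights w_k = S_k^2 down-weight noisy log-signal samples."""
    B = build_b_matrix(bval, bvecs)                          # (K, 6)
    y = -np.log(np.clip(S / S0[..., None], 1e-6, None))     # (..., K)
    d = np.empty(S.shape[:-1] + (6,))
    for idx in np.ndindex(S.shape[:-1]):                     # loop over voxels
        W = np.diag(S[idx] ** 2)
        d[idx] = np.linalg.solve(B.T @ W @ B, B.T @ W @ y[idx])
    return d                                                 # (..., 6) tensor elements per voxel
```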
The preliminary quantitative parameter map comprises preliminary quantitative indices of the diffusion tensor corresponding to the first diffusion weighted image. It should be noted that the quantitative indices specifically adopted may be determined according to the acquisition object. If the acquisition object comprises a heart, the quantitative indices may include, but are not limited to, at least one of mean diffusivity, fractional anisotropy, and helix angle. If the acquisition object comprises a brain, the quantitative indices may include, but are not limited to, at least one of mean diffusivity, fractional anisotropy, axial diffusivity (AD), and radial diffusivity (RD).
And S140, determining a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion weighted image.
The target quantitative parameter map may be understood as a quantitative parameter map that is output to a user for reference. In view of the fact that the obtained preliminary quantitative parameter map of the acquisition object may not be accurate enough, a final quantitative parameter map of the acquisition object may be determined by combining the first diffusion weighted image based on actual acquisition with the preliminary quantitative parameter map. In the embodiment of the invention, the accuracy of the target quantitative parameter map is higher than that of the preliminary quantitative parameter map.
Specifically, the target quantitative parameter map includes target quantitative indices of the diffusion tensor corresponding to the first diffusion weighted image. As previously described, if the acquisition object comprises a heart, the target quantitative indices may comprise at least one of mean diffusivity, fractional anisotropy, and helix angle.
In the embodiment of the invention, a conventional algorithm can be adopted to process the image based on the preliminary quantitative parameter map and the first diffusion weighted image so as to obtain the target quantitative parameter map of the acquisition object. In view of the nonlinear relationship between quantitative parameter maps and diffusion-weighted images, a neural network model may be employed to determine a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion-weighted image.
According to the technical scheme, by acquiring the reference image of the acquisition object without diffusion weighting and the first diffusion-weighted images in the first number of diffusion gradient encoding directions, only a small number of diffusion-weighted images need to be acquired, which saves image acquisition time. A second diffusion-weighted image in a second number of other diffusion gradient encoding directions besides the first number of diffusion gradient encoding directions is generated based on the reference image and the first diffusion-weighted image; a preliminary quantitative parameter map of the acquisition object is determined based on the first and second diffusion-weighted images, so that images in the other directions are generated and the quantitative parameter map is solved by exploiting the correlation among weighted images in different directions; and a target quantitative parameter map of the acquisition object is determined based on the preliminary quantitative parameter map and the first diffusion-weighted image to improve the accuracy of the quantitative parameter map. This solves the technical problem that the acquisition time of diffusion tensor imaging is too long: images in other directions can be generated and tensor fitting can be performed from a small number of diffusion-weighted images, which reduces the number of repeated acquisitions, saves image acquisition time, and outputs a high-quality target quantitative parameter map.
Fig. 2 is a flow chart of another fast diffusion tensor imaging method according to an embodiment of the present invention. The present embodiment refines how to determine the second diffusion weighted image and how to determine the target quantitative parameter map of the acquisition object on the basis of the above embodiments.
On the basis of any optional aspect of the invention, optionally, the generating, based on the reference image and the first diffusion weighted image, a second diffusion weighted image of a second number of other diffusion gradient encoding directions than the first number of diffusion gradient encoding directions includes: generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions based on the reference image, the first diffusion weighted image, and a pre-trained first model; the first model comprises at least one image processing unit, the image processing unit comprises an image generation layer and an image correction layer which are connected in series, the image generation layer is constructed based on a diffusion tensor imaging fitting model, and the image correction layer is constructed based on a neural network model.
On the basis of any optional technical solution of the present invention, optionally, the determining, based on the preliminary quantitative parameter map and the first diffusion weighted image, a target quantitative parameter map of the acquisition object includes: and inputting the preliminary quantitative parameter map and the first diffusion weighted image into a pre-trained second model to obtain a target quantitative parameter map of the acquisition object.
Reference is made to the description of this example for a specific implementation. The technical features that are the same as or similar to those of the foregoing embodiments are not described herein.
As shown in fig. 2, the method includes:
S210, acquiring a reference image of an acquisition object without diffusion weighting and a first diffusion-weighted image in a first number of diffusion gradient encoding directions.
S220, generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions than the first number of diffusion gradient encoding directions based on the reference image, the first diffusion weighted image and a pre-trained first model.
The first model comprises at least one image processing unit, the image processing unit comprises an image generation layer and an image correction layer which are connected in series, the image generation layer is constructed based on a diffusion tensor imaging fitting model, and the image correction layer is constructed based on a neural network model.
In an embodiment of the invention, the first model comprises one, two or more image processing units. Each image processing unit may be composed of an image generation layer and an image correction layer connected in series. Wherein the image generation layer is operable to generate a diffusion weighted image of the other diffusion gradient encoding direction based on the reference image and the first diffusion weighted image. The image correction layer can be used for correcting the diffusion weighted image generated by the image generation layer to obtain a corrected diffusion weighted image.
Optionally, the first model comprises a first image processing unit. Specifically, the reference image and the first diffusion-weighted image may be input to an image generation layer of the first image processing unit; generating, by a diffusion tensor imaging fitting model in the image generation layer, a preliminary diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions; and inputting the preliminary diffusion weighted image into the image correction layer to obtain a corrected diffusion weighted image, and determining a second diffusion weighted image based on the corrected diffusion weighted image.
Specifically, the generating, by using the diffusion tensor imaging fitting model in the image generation layer, a preliminary diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions may include: solving a first diffusion tensor corresponding to the first diffusion weighted image through a diffusion tensor calculation model in the image generation layer; generating a preliminary diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions by the first diffusion tensor, the reference image and the diffusion tensor imaging fitting model.
As previously described, the diffusion tensor imaging fitting model may be as shown in equation (1). The diffusion tensor calculation model can be further solved based on the diffusion tensor imaging fitting model, as shown in a formula (3).
And then inputting the preliminary diffusion weighted image into the image correction layer to obtain a corrected diffusion weighted image, and determining a second diffusion weighted image based on the corrected diffusion weighted image. In an embodiment of the present invention, determining the second diffusion-weighted image based on the modified diffusion-weighted image may be taking the modified diffusion-weighted image as the second diffusion-weighted image; the modified diffusion-weighted image may also be further processed to obtain a second diffusion-weighted image.
Optionally, on this basis, the first model further comprises a second image processing unit in series with the first image processing unit. Accordingly, the determining a second diffusion weighted image based on the modified diffusion weighted image may include: inputting the corrected diffusion-weighted image and the first diffusion-weighted image into an image generation layer of the second image processing unit; generating, by a diffusion tensor imaging fitting model in the image generation layer, a transition diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions; and inputting the transition diffusion weighted image into a second image correction layer of the second image processing unit to obtain a second diffusion weighted image.
Similarly, generating, by means of a diffusion tensor imaging fitting model in the image generation layer, a transitional diffusion weighted image of a second number of other diffusion gradient encoding directions than the first number of diffusion gradient encoding directions may specifically include: solving a second diffusion tensor corresponding to the first diffusion weighted image and the second diffusion weighted image through a diffusion tensor calculation model in the image generation layer; and generating transition diffusion weighted images of a second number of other diffusion gradient coding directions except the first number of diffusion gradient coding directions through the second diffusion tensor, the reference image and the diffusion tensor imaging fitting model. And then, inputting the transition diffusion weighted image into a second image correction layer of the second image processing unit to obtain a second diffusion weighted image.
The first image correction layer and the second image correction layer may be obtained based on U-net training. Illustratively, each U-net has a 4-level encoding and 4-level decoding structure, and a skip connection layer may be disposed between corresponding encoding and decoding levels for information transmission.
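For concreteness, a minimal PyTorch-style sketch of such a 4-level encoder/decoder U-net with skip connections is given below; the channel widths, kernel sizes and input/output channel counts are illustrative assumptions, not the patent's exact configuration, and the spatial size is assumed to be divisible by 16.

```python
import torch
import torch.nn as nn

# Illustrative sketch; layer widths are assumptions.

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNet(nn.Module):
    """4-level encoder/decoder with skip connections between matching levels."""
    def __init__(self, c_in, c_out, base=32):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.encoders = nn.ModuleList()
        prev = c_in
        for c in chs:
            self.encoders.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(chs[-1], chs[-1] * 2)
        self.ups = nn.ModuleList()
        self.decoders = nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.decoders.append(conv_block(c * 2, c))   # *2 because of the skip concatenation
            prev = c
        self.head = nn.Conv2d(chs[0], c_out, 1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)                              # feature map passed across the skip connection
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)
```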
Fig. 3 is a schematic diagram showing an example of the model structure of a first model employed to perform the fast diffusion tensor imaging method according to an embodiment of the present invention. In this example, for ease of description, the first model is simply referred to as G-Net. The diffusion tensor imaging fitting model is integrated into G-Net so as to keep data consistency between the diffusion-weighted images generated in the other diffusion gradient encoding directions and the originally acquired diffusion-weighted images. G-Net also comprises two U-net networks, which adjust the generated diffusion-weighted images in the other diffusion gradient encoding directions so that they more closely resemble diffusion-weighted images actually acquired in those diffusion gradient encoding directions.
Specifically, the diffusion-weighted images of the original 6 diffusion gradient encoding directions and the reference image without diffusion weighting are first used to calculate a first diffusion tensor D_init through formula (3) of the diffusion tensor calculation model, and a group of diffusion-weighted images in the other diffusion gradient encoding directions, indexed k = 7, ..., N_d, is then generated according to formula (1) of the diffusion tensor imaging fitting model, where N_d is the total number of diffusion gradient encoding directions of the diffusion-weighted images to be obtained. Since the diffusion-weighted images of the original 6 diffusion gradient encoding directions already exist, the first U-net network generates the N_d - 6 diffusion-weighted images G_1(·; θ_1). The generated N_d - 6 diffusion-weighted images are combined with the originally acquired 6 diffusion-weighted images, an intermediate tensor D_correct is calculated through formula (3) of the diffusion tensor calculation model, and a further series of diffusion-weighted images, k = 7, ..., N_d, is generated from D_correct and corrected by the second U-net. Finally, the original 6 diffusion-weighted images, together with the images generated and corrected in this way, form the output of the second U-net.
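The pipeline just described can be sketched in PyTorch as follows. It reuses the UNet class from the previous sketch; the tensor layout (batch, direction-channel, height, width), the helper names and the default of 6 acquired directions are assumptions made for illustration. Here the two physics-based synthesis steps play the role of the image generation layers and the two U-nets play the role of the image correction layers.

```python
import torch
import torch.nn as nn

# Illustrative sketch; UNet is the class from the previous sketch, shapes/names are assumptions.

def b_matrix(bval, bvecs):
    """B_k = b*(gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz) for each direction; bvecs: (K, 3)."""
    gx, gy, gz = bvecs[:, 0], bvecs[:, 1], bvecs[:, 2]
    return bval * torch.stack([gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz], dim=1)

def fit_tensor_elems(S0, S, bval, bvecs):
    """Per-pixel least-squares tensor elements d = (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz); S: (N, K, H, W)."""
    B = b_matrix(bval, bvecs)                                        # (K, 6)
    y = -torch.log((S / S0.unsqueeze(1)).clamp_min(1e-6))            # formula (2) right-hand side
    return torch.einsum('dk,nkhw->nhwd', torch.linalg.pinv(B), y)    # formula (3), shape (N, H, W, 6)

def synthesize(S0, d, bval, bvecs):
    """Formula (1): S_k = S0 * exp(-b g_k^T D g_k) for each requested direction; returns (N, K', H, W)."""
    B = b_matrix(bval, bvecs)                                        # (K', 6)
    quad = torch.einsum('nhwd,kd->nkhw', d, B)                       # b * g^T D g per pixel
    return S0.unsqueeze(1) * torch.exp(-quad)

class GNet(nn.Module):
    """Sketch of the generation network of Fig. 3: physics-based synthesis of the missing
    directions interleaved with two U-net correction stages (not the patent's exact design)."""
    def __init__(self, n_total, n_acq=6):
        super().__init__()
        n_gen = n_total - n_acq
        self.correct1 = UNet(n_gen, n_gen)                           # G_1(.; theta_1)
        self.correct2 = UNet(n_gen, n_gen)                           # G_2(.; theta_2)

    def forward(self, S0, S_acq, bval, bvecs_acq, bvecs_new):
        d_init = fit_tensor_elems(S0, S_acq, bval, bvecs_acq)                     # D_init
        S_gen = self.correct1(synthesize(S0, d_init, bval, bvecs_new))            # first correction
        d_corr = fit_tensor_elems(S0, torch.cat([S_acq, S_gen], 1), bval,
                                  torch.cat([bvecs_acq, bvecs_new], 0))           # D_correct
        S_out = self.correct2(synthesize(S0, d_corr, bval, bvecs_new))            # second correction
        return torch.cat([S_acq, S_out], dim=1)                                   # acquired + generated DWIs
```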
In the embodiment of the invention, the first model can be obtained by taking the sample diffusion weighted image and the reference image corresponding to the sample diffusion weighted image as training samples and training the pre-established first model based on the training samples. The model parameters of the first model may be adjusted based on a loss value calculated by a loss function of the first model. The loss of the first model includes a loss of the first image correction layer and a loss of the second image correction layer.
Specifically, the loss function of the first model is calculated by the following formula:
\mathcal{L}_G(\theta_1, \theta_2) = \sum_{n=1}^{N} \left( \lambda_1 \left\| G_1(x_n^{(1)}; \theta_1) - Y_n \right\|^2 + \lambda_2 \left\| G_2(x_n^{(2)}; \theta_2) - Y_n \right\|^2 \right)

wherein \mathcal{L}_G(\theta_1, \theta_2) represents the loss function of the first model; x_n^{(1)} represents the sample image input to the first image correction layer in the first model, \theta_1 represents the network parameters learned by the first image correction layer during model training, and G_1(x_n^{(1)}; \theta_1) represents the output image of the first image correction layer; x_n^{(2)} represents the input image of the second image correction layer in the first model, \theta_2 represents the network parameters learned by the second image correction layer during model training, and G_2(x_n^{(2)}; \theta_2) represents the output image of the second image correction layer; Y_n denotes the diffusion-weighted images that the first and second image correction layers are expected to output, N denotes the total number of sample images in the training dataset, n denotes the n-th sample image, and \lambda_1 and \lambda_2 denote regularization coefficients for balancing the first image correction layer and the second image correction layer.
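Assuming a squared-error norm (the text above specifies only that the two correction-layer losses are balanced by \lambda_1 and \lambda_2), the combined loss could be computed per batch as in the following sketch; the variable names are illustrative:

```python
import torch

# Illustrative sketch; a squared-error norm is an assumption.
def gnet_loss(out1, out2, target, lam1=1.0, lam2=1.0):
    """lam1*||G1(x1; theta1) - Y||^2 + lam2*||G2(x2; theta2) - Y||^2, averaged over the batch."""
    return (lam1 * torch.mean((out1 - target) ** 2)
            + lam2 * torch.mean((out2 - target) ** 2))
```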
In diffusion tensor imaging research, neural networks have been used to learn the nonlinear relationship between diffusion-weighted images and the corresponding quantitative parameter maps, bypassing the tensor fitting step; replacing the traditional fitting with a neural network avoids the significant influence of noise and motion in the weighted images on the quantitative parameters, and the quantitative parameter map can be fitted from 6 diffusion-weighted images in different diffusion gradient encoding directions. The network employs an encoder-decoder architecture of the U-net type. The hierarchical structure can represent a highly complex nonlinear model, with different network parameters representing different models. To obtain optimal performance and increase robustness to noise, residual learning and image-block-based training may be employed.
In diffusion tensor imaging, acquiring high-quality data is very difficult, and the available data are often insufficient to support the training of conventional networks. According to the technical scheme provided by the embodiment of the invention, the physical model of diffusion tensor imaging is embedded in the first model, and the correlation among diffusion-weighted images in different diffusion gradient encoding directions is exploited to generate the diffusion-weighted images in the other diffusion gradient encoding directions and to solve the quantitative parameter map; a large number of training samples is therefore not needed, and the interpretability of the first model is improved at the same time.
S230, determining a preliminary quantitative parameter map of the acquisition object based on the first diffusion weighted image and the second diffusion weighted image.
S240, inputting the preliminary quantitative parameter map and the first diffusion weighted image into a pre-trained second model to obtain a target quantitative parameter map of the acquisition object.
The second model may be a neural network model which is trained in advance and can process images. Illustratively, the second model may be a modified ResNet. For example, the second model may consist of convolutional layers (Conv) and nonlinear activation functions (e.g., the PReLU function) and contain skip connection layers. The advantage of this arrangement is that gradient vanishing or gradient explosion during training can be effectively avoided.
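A minimal PyTorch sketch of such a residual fitting network is shown below; the layer counts, channel widths, and the idea of concatenating the preliminary parameter maps with the 6 acquired diffusion-weighted images as input channels are assumptions for illustration. In use, c_in would be the number of preliminary parameter maps plus the number of acquired diffusion-weighted images, and c_out the number of target quantitative maps (e.g. FA, MD and HA for the heart).

```python
import torch.nn as nn

# Illustrative sketch; widths and block counts are assumptions.

class ResidualBlock(nn.Module):
    """Conv + PReLU pair with a skip (identity) connection, as described for the second model."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(x + self.body(x))     # skip connection mitigates vanishing/exploding gradients

class FNet(nn.Module):
    """Maps preliminary parameter maps concatenated with the acquired DWIs to the target quantitative maps."""
    def __init__(self, c_in, c_out, width=64, n_blocks=4):
        super().__init__()
        layers = [nn.Conv2d(c_in, width, 3, padding=1), nn.PReLU()]
        layers += [ResidualBlock(width) for _ in range(n_blocks)]
        layers += [nn.Conv2d(width, c_out, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```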
In the embodiment of the invention, the second model can be obtained by training a pre-established initial neural network model based on a training sample by taking the sample diffusion weighted image and a sample quantitative parameter map corresponding to the sample diffusion weighted image as the training sample. The sample quantitative parameter map can be obtained by fitting a sample diffusion weighted image and a second diffusion weighted image of other diffusion gradient coding directions generated according to the sample diffusion weighted image.
Fig. 4 is a schematic diagram showing an example of a model structure of an overall model employed for performing the fast diffusion tensor imaging method according to the embodiment of the present invention. The second model may be a fitted Network, which may be denoted as a Fitting Network, abbreviated as F-Net, as shown in FIG. 4. In the training process of the fitting network, network parameters of the fitting network can be adjusted based on a preset loss function so as to obtain a second model. Specifically, a loss value between the quantitative parameter map actually output by the second model and the quantitative parameter map expected to be output by the second model can be calculated through a loss function of the second model, and then the network parameters of the second model are adjusted according to the loss value.
Optionally, the loss function of the second model is calculated by the following formula:
\mathcal{L}_F(\theta) = \sum_{n=1}^{N} \left\| F(x_n; \theta) - Z_n \right\|^2

wherein \mathcal{L}_F(\theta) represents the loss function of the second model; F(x_n; \theta) represents the quantitative parameter map actually output by the second model when the sample image x_n is input to the second model, the sample image comprising a sample quantitative parameter map and sample diffusion-weighted images; \theta represents the network parameters learned by the second model during model training; Z_n represents the quantitative parameter map expected to be output by the second model; N represents the total number of sample images in the training dataset; and n represents the n-th sample image.
The end condition of the second model training may be convergence of a loss function of the second model, or the number of training iterations of the second model reaches a preset number of times threshold, or the like.
During training of the whole network (first model + second model, FG network for short), 80% of the data can be used as training data and 20% as validation data. The network may be trained using the adaptive moment estimation (Adam) algorithm to minimize the loss function. For G-Net, the training parameters are as follows: learning rate = 0.0001, weight decay = 0.0, batch size = 1. To prevent overfitting, training was stopped 30 epochs after the validation loss stabilized.
To train the whole network, the loss function of the FG network may be defined over the FA, MD and HA maps and the first eigenvector, the FG network being trained so that the target quantitative parameter map corresponding to the input diffusion-weighted images is taken as its final output. Illustratively, the learning rate of the FG network may be set to 0.0001, with weight decay = 0.0001 and batch size = 1. To prevent overfitting, training may be stopped 40 epochs after the validation loss stabilizes (converges).
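In code, this training configuration (Adam, learning rate 0.0001, early stopping once the validation loss stops improving) might look like the following sketch; the data loaders, loss function and total epoch budget are assumptions, not specified here:

```python
import torch

# Illustrative sketch; loaders are assumed to yield (input, target) pairs.
def train(model, train_loader, val_loader, loss_fn,
          epochs=200, lr=1e-4, weight_decay=0.0, patience=30):
    """Adam training with early stopping a fixed number of epochs after the validation loss stops improving."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    best, wait = float('inf'), 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)
        if val < best:
            best, wait = val, 0
        else:
            wait += 1
            if wait >= patience:              # e.g. 30 epochs for G-Net, 40 for the FG network
                break
    return model
```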
According to the technical scheme, the first model can be used for generating diffusion weighted images in other diffusion gradient encoding directions and fitting diffusion tensors based on a small amount of heart diffusion tensor images, the diffusion weighted images in other diffusion gradient encoding directions are combined with the diffusion weighted images in the existing 6 diffusion gradient encoding directions to obtain a preliminary quantitative parameter map, the preliminary quantitative parameter map is combined with the diffusion weighted images in the original 6 diffusion gradient encoding directions, a second model is combined to directly output a high-quality target quantitative parameter map, imaging time is saved, and the target quantitative parameter map can be output efficiently and accurately.
FIG. 5 shows a comparison of quantitative parameter maps obtained with a conventional method and with the method according to an embodiment of the present invention. First, data acquisition is performed on a magnetic resonance imaging apparatus. Specifically, diffusion-weighted images of an isolated heart are acquired with fast spin echo in 16 diffusion gradient encoding directions, each repeated 8 times, with a magnetic resonance diffusion sensitivity factor (b value) of 800 s/mm². Then, based on the diffusion-weighted images of the 16 diffusion gradient encoding directions, the quantitative parameter maps obtained by conventional linear least-squares fitting are taken as the reference standard (see the image columns corresponding to Reference in FIG. 5), and the target quantitative parameter maps (see the image columns corresponding to FG-Net in FIG. 5) are obtained from the diffusion-weighted images of only 6 diffusion gradient encoding directions using the rapid diffusion tensor imaging method according to the embodiment of the present invention. FA, MD and HA are the finally output quantitative parameter maps. Comparing and verifying the quantitative parameter maps obtained by the two methods shows that they are highly consistent, with the quantitative error within 4% (see the image column corresponding to Error in FIG. 5). Compared with the conventional method, the rapid diffusion tensor imaging method provided by the embodiment of the invention therefore has a smaller error and a lower normalized mean absolute error.
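For reference, a normalized mean absolute error of this kind can be computed as in the following sketch; normalizing by the mean magnitude of the reference map, and restricting the comparison to a tissue mask, are assumptions about the exact metric used:

```python
import numpy as np

# Illustrative sketch; the exact normalization is an assumption.
def normalized_mae(estimate, reference, mask=None):
    """Mean absolute error between two parameter maps, normalized by the reference mean magnitude."""
    if mask is None:
        mask = np.ones(reference.shape, dtype=bool)
    err = np.abs(estimate[mask] - reference[mask]).mean()
    return err / np.abs(reference[mask]).mean()
```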
As shown in FIG. 6, comparing the diffusion-weighted images generated in the first model with the actually acquired diffusion-weighted images shows that the error is small, so that image acquisition time is saved while a foundation is laid for obtaining a high-precision target quantitative parameter map.
Fig. 7 is a schematic structural diagram of a fast diffusion tensor imaging device according to an embodiment of the present invention. The apparatus may be implemented in software and/or hardware. As shown in fig. 7, the apparatus includes: the system comprises a diffusion image acquisition module 310, a diffusion image generation module 320, a preliminary quantitative parameter map determination module 330 and a target quantitative parameter map determination module 340.
Wherein, the diffusion image acquisition module 310 is configured to acquire a reference image without diffusion weighting of an acquisition object and a first diffusion weighted image of a first number of diffusion gradient encoding directions; a diffusion image generation module 320 for generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions than the first number of diffusion gradient encoding directions based on the reference image and the first diffusion weighted image; a preliminary quantitative parameter map determining module 330, configured to determine a preliminary quantitative parameter map of the acquisition object based on the first diffusion weighted image and the second diffusion weighted image; a target quantitative parameter map determining module 340, configured to determine a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion weighted image.
According to the technical scheme, the diffusion image acquisition module acquires the reference image of the acquisition object without diffusion weighting and the first diffusion-weighted images in the first number of diffusion gradient encoding directions, so that only a small number of diffusion-weighted images need to be acquired and image acquisition time is saved; the diffusion image generation module generates, based on the reference image and the first diffusion-weighted image, a second diffusion-weighted image in a second number of other diffusion gradient encoding directions besides the first number of diffusion gradient encoding directions; the preliminary quantitative parameter map determining module determines a preliminary quantitative parameter map of the acquisition object based on the first and second diffusion-weighted images, generating images in the other directions and solving the quantitative parameter map by exploiting the correlation among weighted images in different directions; and the target quantitative parameter map determining module determines a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion-weighted image, so as to improve the accuracy of the quantitative parameter map. This solves the technical problem that the acquisition time of diffusion tensor imaging is too long: images in other directions can be generated and tensor fitting can be performed from a small number of diffusion-weighted images, which reduces the number of repeated acquisitions, saves image acquisition time, and outputs a high-quality target quantitative parameter map.
Optionally, the diffusion image generation module is specifically configured to:
Generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions based on the reference image and the first diffusion weighted image, comprising:
Generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions based on the reference image, the first diffusion weighted image, and a pre-trained first model;
the first model comprises at least one image processing unit, the image processing unit comprises an image generation layer and an image correction layer which are connected in series, the image generation layer is constructed based on a diffusion tensor imaging fitting model, and the image correction layer is constructed based on a neural network model.
Optionally, the first model includes a first image processing unit; correspondingly, the diffusion image generation module specifically may include: an image input sub-module, an image generation sub-module, and an image correction sub-module.
Wherein the image input sub-module is used for inputting the reference image and the first diffusion weighted image into the image generation layer of the first image processing unit; an image generation sub-module for generating, by means of a diffusion tensor imaging fitting model in the image generation layer, a preliminary diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions; and the image correction sub-module is used for inputting the preliminary diffusion weighted image into the image correction layer to obtain a corrected diffusion weighted image, and determining a second diffusion weighted image based on the corrected diffusion weighted image.
Optionally, the first model further includes a second image processing unit connected in series with the first image processing unit; accordingly, the image correction sub-module is configured to be specifically configured to:
inputting the corrected diffusion-weighted image and the first diffusion-weighted image into an image generation layer of the second image processing unit;
Generating, by a diffusion tensor imaging fitting model in the image generation layer, a transition diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions;
And inputting the transition diffusion weighted image into a second image correction layer of the second image processing unit to obtain a second diffusion weighted image.
Optionally, the loss function of the first model is calculated by the following formula:
\mathcal{L}_G(\theta_1, \theta_2) = \sum_{n=1}^{N} \left( \lambda_1 \left\| G_1(x_n^{(1)}; \theta_1) - Y_n \right\|^2 + \lambda_2 \left\| G_2(x_n^{(2)}; \theta_2) - Y_n \right\|^2 \right)

wherein \mathcal{L}_G(\theta_1, \theta_2) represents the loss function of the first model; x_n^{(1)} represents the sample image input to the first image correction layer in the first model, \theta_1 represents the network parameters learned by the first image correction layer during model training, and G_1(x_n^{(1)}; \theta_1) represents the output image of the first image correction layer; x_n^{(2)} represents the input image of the second image correction layer in the first model, \theta_2 represents the network parameters learned by the second image correction layer during model training, and G_2(x_n^{(2)}; \theta_2) represents the output image of the second image correction layer; Y_n denotes the diffusion-weighted images that the first and second image correction layers are expected to output, N denotes the total number of sample images in the training dataset, n denotes the n-th sample image, and \lambda_1 and \lambda_2 denote regularization coefficients for balancing the first image correction layer and the second image correction layer.
Optionally, the image generation submodule is specifically configured to:
Solving a first diffusion tensor corresponding to the first diffusion weighted image through a diffusion tensor calculation model in the image generation layer;
Generating a preliminary diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions by the first diffusion tensor, the reference image and the diffusion tensor imaging fitting model.
Optionally, the target quantitative parameter map determining module is specifically configured to:
and inputting the preliminary quantitative parameter map and the first diffusion weighted image into a pre-trained second model to obtain a target quantitative parameter map of the acquisition object.
Optionally, the loss function of the second model is calculated by the following formula:
\mathcal{L}_F(\theta) = \sum_{n=1}^{N} \left\| F(x_n; \theta) - Z_n \right\|^2

wherein \mathcal{L}_F(\theta) represents the loss function of the second model; F(x_n; \theta) represents the quantitative parameter map actually output by the second model when the sample image x_n is input to the second model, the sample image comprising a sample quantitative parameter map and sample diffusion-weighted images; \theta represents the network parameters learned by the second model during model training; Z_n represents the quantitative parameter map expected to be output by the second model; N represents the total number of sample images in the training dataset; and n represents the n-th sample image.
Optionally, the target quantitative parameter map includes target quantitative indices of the diffusion tensor corresponding to the first diffusion weighted image, and if the acquisition object includes a heart, the target quantitative indices include at least one of mean diffusivity, fractional anisotropy, and helix angle.
The rapid diffusion tensor imaging device provided by the embodiment of the invention can execute the rapid diffusion tensor imaging method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the rapid diffusion tensor imaging method.
Fig. 8 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 8, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the fast diffusion tensor imaging method.
In some embodiments, the fast diffusion tensor imaging method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the fast diffusion tensor imaging method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the fast diffusion tensor imaging method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described herein may be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that the various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of fast diffusion tensor imaging comprising:
Acquiring a reference image without diffusion weighting of an acquisition object and a first diffusion weighted image of a first number of diffusion gradient encoding directions;
Generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions based on the reference image and the first diffusion weighted image;
Determining a preliminary quantitative parameter map of the acquisition object based on the first diffusion-weighted image and the second diffusion-weighted image;
And determining a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion weighted image.
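For readers who prefer code, the four steps of claim 1 could be sketched, purely for illustration, roughly as follows; the function and model names (fast_dti_pipeline, fit_dti_parameters, first_model, second_model) are hypothetical stand-ins, not identifiers from the patent.

```python
import numpy as np

def fast_dti_pipeline(b0_image, dwi_first, bvecs_first, bvecs_second,
                      first_model, second_model, fit_dti_parameters):
    """Hypothetical sketch of the four claimed steps for one slice/volume.

    b0_image           : non-diffusion-weighted reference image
    dwi_first          : first diffusion weighted images (first set of directions)
    bvecs_first/second : first and additional diffusion gradient encoding directions
    first_model        : pre-trained model that synthesizes the missing DWIs (claim 2)
    second_model       : pre-trained model that refines the quantitative map (claim 7)
    fit_dti_parameters : conventional DTI fit producing a quantitative parameter map
    """
    # Step 2: generate second diffusion weighted images for the remaining directions
    dwi_second = first_model(b0_image, dwi_first, bvecs_first, bvecs_second)

    # Step 3: preliminary quantitative parameter map from acquired + generated DWIs
    all_dwi = np.concatenate([dwi_first, dwi_second], axis=0)
    all_bvecs = np.concatenate([bvecs_first, bvecs_second], axis=0)
    preliminary_map = fit_dti_parameters(b0_image, all_dwi, all_bvecs)

    # Step 4: refine with the second model to obtain the target parameter map
    target_map = second_model(preliminary_map, dwi_first)
    return target_map
```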
2. The method of claim 1, wherein the generating a second diffusion weighted image for a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions based on the reference image and the first diffusion weighted image comprises:
Generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions based on the reference image, the first diffusion weighted image, and a pre-trained first model;
the first model comprises at least one image processing unit, the image processing unit comprises an image generation layer and an image correction layer which are connected in series, the image generation layer is constructed based on a diffusion tensor imaging fitting model, and the image correction layer is constructed based on a neural network model.
3. The method of claim 2, wherein the first model comprises a first image processing unit;
The generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions than the first number of diffusion gradient encoding directions based on the reference image, the first diffusion weighted image, and a pre-trained first model, comprising:
Inputting the reference image and the first diffusion-weighted image into an image generation layer of the first image processing unit;
Generating, by a diffusion tensor imaging fitting model in the image generation layer, a preliminary diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions;
and inputting the preliminary diffusion weighted image into the image correction layer to obtain a corrected diffusion weighted image, and determining a second diffusion weighted image based on the corrected diffusion weighted image.
4. A method according to claim 3, wherein the first model further comprises a second image processing unit in series with the first image processing unit;
the determining a second diffusion weighted image based on the modified diffusion weighted image comprises:
inputting the corrected diffusion-weighted image and the first diffusion-weighted image into an image generation layer of the second image processing unit;
Generating, by a diffusion tensor imaging fitting model in the image generation layer, a transition diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions;
and inputting the transition diffusion weighted image into an image correction layer of the second image processing unit to obtain a second diffusion weighted image.
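As a reading aid only, the two cascaded image processing units of claims 2-4 could be sketched in PyTorch along the following lines; the CNN used for the correction layer and all class names are assumptions of this sketch, and the DTI-based image generation layer is treated as a plug-in callable.

```python
import torch
import torch.nn as nn

class ImageProcessingUnit(nn.Module):
    """One unit: a physics-driven image generation layer followed by a learned correction layer."""
    def __init__(self, generation_layer, n_directions):
        super().__init__()
        self.generation_layer = generation_layer        # DTI fitting model, no learned weights
        self.correction_layer = nn.Sequential(          # small CNN correction layer (assumed form)
            nn.Conv2d(n_directions, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_directions, 3, padding=1),
        )

    def forward(self, image_a, dwi_first):
        generated = self.generation_layer(image_a, dwi_first)  # DWIs for the other directions
        return self.correction_layer(generated)                # corrected DWIs

class FirstModel(nn.Module):
    """Two image processing units connected in series, as in claims 3 and 4."""
    def __init__(self, generation_layer, n_directions):
        super().__init__()
        self.unit1 = ImageProcessingUnit(generation_layer, n_directions)
        self.unit2 = ImageProcessingUnit(generation_layer, n_directions)

    def forward(self, reference, dwi_first):
        corrected = self.unit1(reference, dwi_first)   # claim 3: reference + first DWI in
        return self.unit2(corrected, dwi_first)        # claim 4: corrected DWI + first DWI in
```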
5. The method of claim 4, wherein the loss function of the first model is calculated by the formula:

$$\mathcal{L}_1(\theta_1,\theta_2)=\sum_{n=1}^{N}\left(\lambda_1\left\|f_1\!\left(x_n^{(1)};\theta_1\right)-y_n\right\|_2^2+\lambda_2\left\|f_2\!\left(x_n^{(2)};\theta_2\right)-y_n\right\|_2^2\right)$$

wherein $\mathcal{L}_1$ represents the loss function of the first model; $x_n^{(1)}$ represents the sample image input to the image correction layer of the first image processing unit in the first model, $\theta_1$ represents the network parameters learned by the image correction layer of the first image processing unit during model training, and $f_1(x_n^{(1)};\theta_1)$ represents the output image of the image correction layer of the first image processing unit; $x_n^{(2)}$ represents the input image of the image correction layer of the second image processing unit in the first model, $\theta_2$ represents the network parameters learned by the image correction layer of the second image processing unit during model training, and $f_2(x_n^{(2)};\theta_2)$ represents the output image of the image correction layer of the second image processing unit; $y_n$ denotes the diffusion weighted image desired to be output by the image correction layer of the first image processing unit and the image correction layer of the second image processing unit; $N$ denotes the total number of sample images in the training dataset, $n$ denotes the $n$-th sample image, and $\lambda_1$ and $\lambda_2$ denote regularization coefficients for balancing the image correction layer of the first image processing unit and the image correction layer of the second image processing unit.
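In code form, and assuming the norms in the claimed loss are squared L2 norms (the claim itself does not restate the norm), the training objective of the first model might be written as:

```python
import torch

def first_model_loss(f1_out, f2_out, y, lambda1=1.0, lambda2=1.0):
    """f1_out, f2_out : batched outputs of the two image correction layers
    y                 : desired diffusion weighted images y_n for the batch
    The squared-L2 data term is an assumption of this sketch."""
    term1 = torch.sum((f1_out - y) ** 2)   # first image processing unit
    term2 = torch.sum((f2_out - y) ** 2)   # second image processing unit
    return lambda1 * term1 + lambda2 * term2
```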
6. A method according to claim 3, wherein said generating preliminary diffusion weighted images of a second number of other diffusion gradient encoding directions than said first number of diffusion gradient encoding directions by means of a diffusion tensor imaging fitting model in said image generation layer comprises:
Solving a first diffusion tensor corresponding to the first diffusion weighted image through a diffusion tensor calculation model in the image generation layer;
Generating a preliminary diffusion weighted image of a second number of other diffusion gradient encoding directions in addition to the first number of diffusion gradient encoding directions by the first diffusion tensor, the reference image and the diffusion tensor imaging fitting model.
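For orientation, the generation layer of claim 6 corresponds to the standard DTI signal model $S(g)=S_0\exp(-b\,g^{\mathsf T}Dg)$: the tensor is first estimated from the acquired directions by a linear least-squares fit of the log-signal, and DWIs along additional directions are then predicted from the fitted tensor and the reference image. A minimal per-voxel NumPy sketch under that assumption (single b-value, no noise handling or masking):

```python
import numpy as np

def dti_design_matrix(bvecs, bval):
    # One row per direction g: b * [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz]
    g = np.asarray(bvecs, dtype=float)
    return bval * np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                                   2 * g[:, 0] * g[:, 1],
                                   2 * g[:, 0] * g[:, 2],
                                   2 * g[:, 1] * g[:, 2]])

def fit_diffusion_tensor(s0, dwi, bvecs, bval):
    """Least-squares estimate of the six unique tensor elements for one voxel."""
    A = dti_design_matrix(bvecs, bval)
    y = -np.log(np.clip(dwi / s0, 1e-6, None))   # linearized signal equation
    d, *_ = np.linalg.lstsq(A, y, rcond=None)    # d = [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    return d

def synthesize_dwi(s0, d, new_bvecs, bval):
    """Predict preliminary DWIs along additional encoding directions from the fitted tensor."""
    A_new = dti_design_matrix(new_bvecs, bval)
    return s0 * np.exp(-A_new @ d)
```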
7. The method of claim 1, wherein the determining the target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion weighted image comprises:
and inputting the preliminary quantitative parameter map and the first diffusion weighted image into a pre-trained second model to obtain a target quantitative parameter map of the acquisition object.
8. The method of claim 7, wherein the loss function of the second model is calculated by the formula:

$$\mathcal{L}_2(\theta)=\sum_{n=1}^{N}\left\|F(x_n;\theta)-Z_n\right\|_2^2$$

wherein $\mathcal{L}_2$ represents the loss function of the second model; $F(x_n;\theta)$ represents the quantitative parameter map actually output by the second model when a sample image $x_n$ is input to the second model, the sample image including a sample quantitative parameter map and a sample diffusion weighted image; $\theta$ represents the network parameters learned by the second model during model training; $Z_n$ represents the quantitative parameter map expected to be output by the second model; $N$ represents the total number of sample images in the training dataset; and $n$ represents the $n$-th sample image.
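Under the same squared-L2 assumption, the second model's objective in claim 8 reduces to a plain supervised regression loss between its output and the expected quantitative parameter map:

```python
import torch

def second_model_loss(model_output, z):
    """model_output : quantitative parameter maps F(x_n; θ) produced by the second model
    z               : expected quantitative parameter maps Z_n
    The squared-L2 penalty is an assumption of this sketch."""
    return torch.sum((model_output - z) ** 2)
```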
9. The method of claim 1, wherein the target quantitative parameter map includes a target quantitative indicator of a diffusion tensor corresponding to the first diffusion weighted image, and wherein the target quantitative indicator includes at least one of a mean diffusivity, a fractional anisotropy, and a helix angle if the acquisition object includes a heart.
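For context, mean diffusivity and fractional anisotropy follow directly from the eigenvalues of the fitted diffusion tensor; the helix angle additionally requires the local cardiac coordinate system and is omitted from this illustrative sketch.

```python
import numpy as np

def md_fa_from_tensor(d):
    """d = [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz] for one voxel (same ordering as the fit above)."""
    D = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
    evals = np.linalg.eigvalsh(D)                    # eigenvalues of the diffusion tensor
    md = evals.mean()                                # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2)
                 / max(np.sum(evals ** 2), 1e-20))   # fractional anisotropy
    return md, fa
```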
10. A fast diffusion tensor imaging apparatus, comprising:
The diffusion image acquisition module is used for acquiring a reference image without diffusion weighting of an acquisition object and a first diffusion weighted image in a first number of diffusion gradient encoding directions;
A diffusion image generation module for generating a second diffusion weighted image of a second number of other diffusion gradient encoding directions than the first number of diffusion gradient encoding directions based on the reference image and the first diffusion weighted image;
The preliminary quantitative parameter map determining module is used for determining a preliminary quantitative parameter map of the acquisition object based on the first diffusion weighted image and the second diffusion weighted image;
And the target quantitative parameter map determining module is used for determining a target quantitative parameter map of the acquisition object based on the preliminary quantitative parameter map and the first diffusion weighted image.
CN202211485863.3A 2022-11-24 2022-11-24 Quick diffusion tensor imaging method and device Pending CN118115608A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211485863.3A CN118115608A (en) 2022-11-24 2022-11-24 Quick diffusion tensor imaging method and device
PCT/CN2023/133052 WO2024109757A1 (en) 2022-11-24 2023-11-21 Fast diffusion tensor imaging method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211485863.3A CN118115608A (en) 2022-11-24 2022-11-24 Quick diffusion tensor imaging method and device

Publications (1)

Publication Number Publication Date
CN118115608A true CN118115608A (en) 2024-05-31

Family

ID=91195308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211485863.3A Pending CN118115608A (en) 2022-11-24 2022-11-24 Quick diffusion tensor imaging method and device

Country Status (2)

Country Link
CN (1) CN118115608A (en)
WO (1) WO2024109757A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942489B (en) * 2018-09-25 2023-04-25 西门子医疗系统有限公司 Magnetic resonance diffusion tensor imaging method and device and fiber bundle tracking method and device
US20220011392A1 (en) * 2018-11-20 2022-01-13 Koninklijke Philips N.V. Diffusion magnetic resonance imaging using spherical neural networks
US11874359B2 (en) * 2019-03-27 2024-01-16 The General Hospital Corporation Fast diffusion tensor MRI using deep learning
CN111445546B (en) * 2020-03-03 2023-05-02 东软医疗系统股份有限公司 Image reconstruction method, device, electronic equipment and storage medium
CN114373095A (en) * 2021-12-09 2022-04-19 山东师范大学 Alzheimer disease classification system and method based on image information
CN115359013A (en) * 2022-08-25 2022-11-18 西安交通大学 Brain age prediction method and system based on diffusion tensor imaging and convolutional neural network

Also Published As

Publication number Publication date
WO2024109757A1 (en) 2024-05-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination