CN114450599A - Maxwell parallel imaging - Google Patents

Maxwell parallel imaging

Info

Publication number
CN114450599A
Authority
CN
China
Prior art keywords
sample
magnetic field
computer
coil
measurement device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202080064460.4A
Other languages
Chinese (zh)
Other versions
CN114450599B (en)
Inventor
J·费尔南德斯维莱纳
S·莱夫基米亚蒂斯
A·保利梅里迪斯
D·泰利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Q Bio Inc
Original Assignee
Q Bio Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Q Bio Inc filed Critical Q Bio Inc
Publication of CN114450599A
Application granted
Publication of CN114450599B
Legal status: Active (granted)


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 33/00 - Arrangements or instruments for measuring magnetic variables
    • G01R 33/20 - Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R 33/44 - Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R 33/48 - NMR imaging systems
    • G01R 33/50 - NMR imaging systems based on the determination of relaxation times, e.g. T1 measurement by IR sequences; T2 measurement by multiple-echo sequences
    • G01R 33/54 - Signal processing systems, e.g. using pulse sequences; generation or control of pulse sequences; operator console
    • G01R 33/56 - Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R 33/5608 - Data processing and visualization specially adapted for MR, e.g. feature analysis and pattern recognition on measured MR data, segmentation, edge contour detection, noise filtering or apodization, deblurring, windowing, zero filling, or generation of gray-scaled, colour-coded or vector-displaying images
    • G01R 33/561 - Image enhancement or correction by reduction of the scanning time, i.e. fast acquiring systems, e.g. using echo-planar pulse sequences
    • G01R 33/5611 - Parallel magnetic resonance imaging, e.g. sensitivity encoding [SENSE], simultaneous acquisition of spatial harmonics [SMASH], unaliasing by Fourier encoding of the overlaps using the temporal dimension [UNFOLD], k-t broad-use linear acquisition speed-up technique [k-t-BLAST], k-t-SENSE
    • G01R 33/565 - Correction of image distortions, e.g. due to magnetic field inhomogeneities
    • G01R 33/56572 - Correction of image distortions caused by a distortion of a gradient magnetic field, e.g. non-linearity of a gradient magnetic field
    • G01R 33/56581 - Correction of image distortions due to Maxwell fields, i.e. concomitant fields
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Abstract

A computer is described that determines coefficients in a representation of coil sensitivities, as well as MR information associated with a sample. During operation, the computer may acquire MR signals associated with the sample from a measurement device. The computer may then access a set of predetermined coil magnetic field basis vectors, where the coil sensitivities of the coils in the measurement device are represented as a superposition of the predetermined coil magnetic field basis vectors weighted by the coefficients, and where the predetermined coil magnetic field basis vectors are solutions to Maxwell's equations. Next, using the MR signals and the set of predetermined coil magnetic field basis vectors, the computer may solve a nonlinear optimization problem for the MR information associated with the sample and the coefficients.

Description

Maxwell parallel imaging
Cross Reference to Related Applications
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 62/907,516, entitled "Maxwell Parallel Imaging," filed on September 27, 2019, which is incorporated herein by reference in its entirety.
Background
Technical Field
The described embodiments relate generally to accelerating analysis of magnetic resonance measurements.
Background
A number of non-invasive characterization techniques are available for determining one or more physical parameters of a sample. For example, magnetic resonance or MR (often referred to as "nuclear magnetic resonance" or NMR), a physical phenomenon in which nuclei in a magnetic field absorb and re-emit electromagnetic radiation, can be used to study magnetic properties of a sample. Furthermore, characterization techniques such as x-ray imaging, x-ray diffraction, computed tomography, neutron diffraction, or electron microscopy can be used to study density variations and short- or long-range periodic structures in solid or rigid materials, in which electromagnetic waves or energetic particles with small de Broglie wavelengths are absorbed or scattered by the sample. In addition, density variations and motion in soft materials or fluids can be studied using ultrasonic imaging, in which ultrasonic waves are transmitted into and reflected within the sample.
In each of these and other non-invasive characterization techniques, one or more external stimuli (such as a particle flux or incident radiation, a static or time-varying scalar field, and/or a static or time-varying vector field) are applied to a sample, and the resulting response of the sample is measured in the form of a physical phenomenon in order to determine, directly or indirectly, one or more physical parameters. For example, in MR, the magnetic nuclear spins may be partially aligned (or polarized) in an applied external DC magnetic field. These nuclear spins precess or rotate around the direction of the external magnetic field at an angular frequency (sometimes referred to as the "Larmor frequency"), which is given by the product of the gyromagnetic ratio of a type of nucleus and the magnitude or strength of the external magnetic field. By applying a perturbation to the polarized nuclear spins, such as one or more radio-frequency (RF) pulses (and, more generally, electromagnetic pulses) at this angular frequency and oriented at right angles or perpendicular to the direction of the external magnetic field, the polarization of the nuclear spins can be transiently changed. The resulting dynamic response of the nuclear spins (e.g., the time-varying total magnetization) can provide information about the physical and material properties of the sample, such as one or more physical parameters associated with the sample.
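The Larmor relationship described above is simply a product of the gyromagnetic ratio and the field strength. A minimal illustrative calculation (not part of the patent) for ¹H protons:

```python
# Illustrative calculation of the Larmor (precession) frequency:
# frequency = (gyromagnetic ratio / 2*pi) * external field strength.
# The constant below is the standard value for 1H protons.

GAMMA_BAR_1H = 42.577e6  # 1H gyromagnetic ratio / (2*pi), in Hz per tesla

def larmor_frequency_hz(b0_tesla, gamma_bar_hz_per_t=GAMMA_BAR_1H):
    """Precession frequency (Hz) of a nuclear spin in a field of b0_tesla."""
    return gamma_bar_hz_per_t * b0_tesla

# At a typical 3 T clinical field strength, protons precess at about
# 127.7 MHz, which sets the frequency of the RF pulses used to perturb them.
f_3t = larmor_frequency_hz(3.0)
print(f"1H Larmor frequency at 3 T: {f_3t / 1e6:.1f} MHz")
```

This is why the RF pulse sequence mentioned above must be tuned to the external field strength: changing the magnet changes the resonance frequency proportionally.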
Furthermore, in general, each characterization technique may allow one or more physical parameters to be determined in small volumes or voxels in the sample, which may be represented using a tensor. Using magnetic resonance imaging (MRI) as an example, the dependence of the precession angular frequency of nuclear spins (e.g., protons, or the isotope ¹H) on the magnitude of the external magnetic field can be used to determine three-dimensional (3D) anatomical and/or chemical-composition images of different materials or tissue types. In particular, by applying a non-uniform or spatially varying magnetic field to the sample, the resulting variation of the precession angular frequency of the ¹H spins is commonly used to spatially localize the measured dynamic response of the ¹H spins to voxels, which can be used to generate images, such as of the internal anatomy of a patient.
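The spatial-localization idea above can be sketched in one dimension: a field gradient makes each position precess at its own frequency, so a Fourier transform of the acquired signal recovers the spin-density profile. This is a toy demonstration with illustrative numbers, not the patent's reconstruction method:

```python
import numpy as np

n = 64
dx = 0.003                       # voxel spacing (m), illustrative
x = np.arange(n) * dx            # positions along the gradient axis
rho = np.zeros(n)                # toy 1D spin-density profile
rho[20:28] = 1.0
rho[40:44] = 0.5

gamma = 2 * np.pi * 42.577e6     # 1H gyromagnetic ratio (rad/s/T)
G = 1e-3                         # gradient strength (T/m), illustrative

# Sampling interval chosen so that gamma*G*dx*dt = 2*pi/n, which makes the
# (demodulated) acquired signal exactly the DFT of the spin density.
dt = 2 * np.pi / (gamma * G * dx * n)
t = np.arange(n) * dt

# Each position contributes a complex exponential at frequency gamma*G*x.
signal = np.array([np.sum(rho * np.exp(-1j * gamma * G * x * tk)) for tk in t])

# An inverse FFT localizes the signal back to voxels.
recovered = np.fft.ifft(signal).real
```

The recovered profile matches `rho`, illustrating why Fourier-transform techniques are central to conventional MRI reconstruction.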
However, characterizing the physical properties of a sample is often time-consuming, complex, and expensive. For example, acquiring MR images with high spatial resolution (i.e., a small voxel size) in MRI often involves performing a large number of measurements (sometimes referred to as "scans"), whose durations are long relative to the relaxation times of the ¹H spins in the different types of tissue of a patient. Furthermore, to obtain high spatial resolution, a large, uniform external magnetic field is typically used during MRI. This external magnetic field is typically generated using a superconducting magnet, which has a toroidal shape with a narrow bore that many patients perceive as confining. Moreover, Fourier transform techniques may be used to facilitate image reconstruction, but at the expense of constraints on the RF pulse sequences and, therefore, on the MR scan time.
The long MR scan time and, in the case of MRI, the closed environment of the magnet bore can degrade the user experience. Furthermore, long MR scan times can reduce throughput, thereby increasing the cost of performing characterization. These types of problems can restrict or limit the use of many characterization techniques.
Disclosure of Invention
A computer is described that determines coefficients in a representation of coil sensitivities, as well as MR information associated with a sample. This computer includes: an interface circuit that communicates with a measurement device (which performs measurements); a processor that executes program instructions; and memory that stores the program instructions. During operation, the computer may acquire MR signals associated with the sample from the measurement device. The computer may then access a set of predetermined coil magnetic field basis vectors, where the coil sensitivities of the coils in the measurement device are represented as a superposition of the predetermined coil magnetic field basis vectors weighted by the coefficients, and where the predetermined coil magnetic field basis vectors are solutions to Maxwell's equations. Next, using the MR signals and the set of predetermined coil magnetic field basis vectors, the computer may solve a nonlinear optimization problem for the MR information associated with the sample and the coefficients.
Note that a given coil sensitivity may be represented by a linear superposition of the products of the coefficients and the predetermined coil magnetic field basis vectors in the set of predetermined coil magnetic field basis vectors.
Furthermore, the nonlinear optimization problem may include a term corresponding to the squared absolute value of the difference between the MR signals and estimated MR signals corresponding to the MR information. This term may include, or may incorporate, a contribution from the coil sensitivities of the coils in the measurement device. Moreover, the nonlinear optimization problem may include one or more constraints on the reduction or minimization of this term, such as one or more regularizers corresponding to the spatial distribution of the MR information.
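The structure of this objective can be sketched numerically. The following is a minimal, simplified sketch with assumed shapes and a toy fully-sampled Fourier encoding, not the patent's actual forward model: each coil sensitivity is a coefficient-weighted superposition of precomputed basis fields, and the cost combines a data-fidelity term with a simple smoothness regularizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_coils, n_basis = 32, 4, 6

# Predetermined (here random, for illustration) field basis vectors.
basis = rng.standard_normal((n_basis, n_vox))
coeffs_true = rng.standard_normal((n_coils, n_basis))
image_true = rng.random(n_vox)

def coil_sensitivities(coeffs):
    # Each coil sensitivity is a linear superposition of the basis vectors.
    return coeffs @ basis                             # (n_coils, n_vox)

def forward(image, coeffs):
    # Simplified, fully sampled model: coil-weighted image, Fourier encoded.
    return np.fft.fft(coil_sensitivities(coeffs) * image, axis=1)

signals = forward(image_true, coeffs_true)            # simulated MR signals

def cost(image, coeffs, lam=1e-3):
    # Squared-residual data-fidelity term plus a smoothness regularizer.
    resid = forward(image, coeffs) - signals
    return np.sum(np.abs(resid) ** 2) + lam * np.sum(np.diff(image) ** 2)
```

The joint unknowns here are `image` and `coeffs`; minimizing `cost` over both at once is what makes the problem nonlinear even though each factor enters linearly.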
Furthermore, the MR information may include a spatial distribution of one or more MR parameters (e.g., an image) in voxels associated with the sample, where the voxels are specified by the MR signals. For example, the MR information may include the nuclear density. Thus, the measurement device may be an MR scanner that performs MRI or another MR measurement technique.
In some embodiments, the MR information may include quantitative values of one or more MR parameters in voxels associated with the sample, where the voxels are specified by the MR signals. For example, the MR information may include: the nuclear density; the spin-lattice relaxation time along the direction of the external magnetic field (T1); the spin-spin relaxation time perpendicular to the direction of the external magnetic field (T2); and/or the adjusted spin-spin relaxation time (T2*). Thus, the measurement device and the subsequent analysis by the computer may involve tensor field mapping, MR fingerprinting, or another quantitative MR measurement technique.
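The relaxation-time parameters listed above follow the standard textbook formulas; a small illustration (standard physics, not the patent's estimation method, with illustrative tissue values):

```python
import numpy as np

def longitudinal_recovery(t, t1, m0=1.0):
    """Mz(t) = M0 * (1 - exp(-t/T1)): recovery along the external field."""
    return m0 * (1.0 - np.exp(-t / t1))

def transverse_decay(t, t2, m0=1.0):
    """Mxy(t) = M0 * exp(-t/T2): decay perpendicular to the field."""
    return m0 * np.exp(-t / t2)

# Illustrative values: gray matter at 3 T has roughly T1 ~ 1.3 s, T2 ~ 0.1 s.
mz = longitudinal_recovery(1.3, t1=1.3)   # ~63% recovered at t = T1
mxy = transverse_decay(0.1, t2=0.1)       # ~37% remaining at t = T2
```

Quantitative MR techniques fit curves like these per voxel, which is why the MR information can include parameter values rather than only image intensities.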
Note that the nonlinear optimization problem may be solved iteratively (e.g., until a convergence criterion is reached). However, in other embodiments, the nonlinear optimization problem may be solved using a pretrained neural network or a pretrained machine-learning model that maps the set of MR signals and the coil magnetic field basis vectors to the spatial distribution of the MR information and the coefficients. Thus, in some embodiments, the nonlinear optimization problem may be solved without iteration.
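The iterate-until-convergence strategy mentioned above can be sketched generically. The quadratic toy objective below stands in for the actual nonlinear MR objective; the step rule, tolerance, and problem are all illustrative assumptions:

```python
import numpy as np

def solve_iteratively(step, x0, tol=1e-8, max_iters=10_000):
    """Apply `step` repeatedly until the update is smaller than `tol`."""
    x = x0
    for i in range(max_iters):
        x_new = step(x)
        if np.linalg.norm(x_new - x) < tol:   # convergence criterion
            return x_new, i + 1
        x = x_new
    return x, max_iters

# Toy objective ||Ax - b||^2 minimized by gradient descent.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
lr = 0.2
grad_step = lambda x: x - lr * 2 * A.T @ (A @ x - b)

x_hat, iters = solve_iteratively(grad_step, np.zeros(2))  # converges to [1, 3]
```

A pretrained network, by contrast, would replace the loop with a single learned forward pass from the signals to the solution.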
Further, the operations performed by the computer may allow multiple MR scan lines to be skipped in the measurements made by the measurement device and then reconstructed when solving the nonlinear optimization problem. Alone, or in addition to reducing the time needed to solve the nonlinear optimization problem, this capability may reduce the MR scan time associated with the measurements performed by the measurement device.
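The scan-line-skipping idea can be illustrated with a k-space sampling mask: acquiring only a subset of phase-encode lines shortens the scan in proportion to the acceleration factor, and the reconstruction fills in the skipped lines. The uniform pattern below is a common illustrative choice, not necessarily the patent's sampling scheme:

```python
import numpy as np

def undersampling_mask(n_lines, accel):
    """Boolean mask keeping every `accel`-th phase-encode line of k-space."""
    mask = np.zeros(n_lines, dtype=bool)
    mask[::accel] = True
    return mask

mask = undersampling_mask(256, 4)            # 4x acceleration
acquired = int(mask.sum())                   # 64 of 256 lines measured
scan_time_fraction = acquired / mask.size    # 0.25, i.e. a 4x shorter scan
```

The information lost by skipping lines is what the coil-sensitivity representation and the nonlinear optimization must recover.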
Another embodiment provides a computer-readable storage medium for use with a computer. This computer-readable storage medium includes program instructions that, when executed by a computer, cause the computer to perform at least some of the operations described above.
Another embodiment provides a method for determining coefficients in a representation of coil sensitivities and MR information associated with a sample. This method includes at least some of the foregoing operations performed by the computer.
This summary is provided to illustrate some exemplary embodiments in order to provide a basic understanding of some aspects of the subject matter described herein. Accordingly, it should be understood that the above-described features are merely examples and should not be construed to narrow the scope or spirit of the subject matter described herein in any way. Other features, aspects, and advantages of the subject matter described herein will become apparent from the following detailed description, the accompanying drawings, and the claims.
Drawings
Fig. 1 is a block diagram illustrating an example of a system according to an embodiment of the present disclosure.
Fig. 2 is a flow diagram illustrating an example of a method for determining model parameters associated with a sample in accordance with an embodiment of the present disclosure.
Fig. 3 is a diagram illustrating an example of communication between components in the system in fig. 1, according to an embodiment of the present disclosure.
Fig. 4 is a diagram illustrating an example of a machine learning model according to an embodiment of the present disclosure.
Fig. 5 is a diagram illustrating an example of a neural model according to an embodiment of the present disclosure.
Fig. 6 is a diagram illustrating an example of classification or segmentation of one or more anatomical structures in a sample according to an embodiment of the present disclosure.
Fig. 7 is a flow chart illustrating an example of a method for determining coefficients in a representation of coil sensitivities and MR information associated with a sample, in accordance with an embodiment of the present disclosure.
Fig. 8 is a diagram illustrating an example of communication between components in the system in fig. 1, according to an embodiment of the present disclosure.
Fig. 9 is a block diagram illustrating an example of an electronic device according to an embodiment of the present disclosure.
Fig. 10 is a diagram illustrating an example of a data structure used by the electronic device of fig. 9 according to an embodiment of the present disclosure.
Note that like reference numerals refer to corresponding parts throughout the drawings. Additionally, multiple instances of the same part are designated by a common prefix, separated from the instance number by a dash.
Detailed Description
In a first set of embodiments, a computer is described that determines coefficients in a representation of coil sensitivities, as well as MR information associated with a sample. During operation, the computer may acquire MR signals associated with the sample from a measurement device. The computer may then access a set of predetermined coil magnetic field basis vectors, where the coil sensitivities of the coils in the measurement device are represented as a superposition of the predetermined coil magnetic field basis vectors weighted by the coefficients, and where the predetermined coil magnetic field basis vectors are solutions to Maxwell's equations. Next, using the MR signals and the set of predetermined coil magnetic field basis vectors, the computer may solve a nonlinear optimization problem for the MR information associated with the sample and the coefficients.
By representing the coil sensitivities in this way and solving the nonlinear optimization problem, this computational technique can reduce the MR scan time needed to measure the MR signals. For example, the operations performed by the computer may allow multiple MR scan lines to be skipped in the measurements made by the measurement device and then reconstructed when solving the nonlinear optimization problem. Alone, or in addition to reducing the time needed to solve the nonlinear optimization problem, this capability may reduce the MR scan time associated with the measurements performed by the measurement device. Indeed, for a given coil set, field of view, external magnetic field strength (or resolution), and for 2D or 3D measurements, the computational technique may achieve the theoretical limit of the possible acceleration of the MR scan time. Thus, the computational technique may reduce the cost of performing MR scans and may improve the overall user experience.
Turning to a second set of embodiments: as previously discussed, existing MRI approaches typically involve a large number of MR scans and long MR scan times, as well as expensive magnets and/or the enclosed environment of a magnet bore, which can degrade the user experience.
One approach to addressing these problems is to use simulations of the physics of the response of a sample to one or more stimuli to determine information, such as one or more physical parameters. For example, using model parameters at the voxel level and a forward model based on one or more differential equations describing the physical phenomenon, a computer may use information specifying one or more excitations as inputs to the forward model and simulate the response physics of the sample as the output of the forward model.
However, this approach often replaces the problem of a large number of MR scans and long MR scan times with the problem of accurately determining the model parameters at the voxel level. For example, the model parameters are typically determined by iteratively applying one or more stimuli, performing measurements, and then solving an inverse problem that uses the measurements to calculate the corresponding model parameters, until a desired accuracy of the simulated response physics is achieved (sometimes referred to as an "iterative approach"). In general, determining the model parameters using these existing techniques can be difficult, time-consuming, and expensive, which can constrain or limit the use of simulations of the response physics to characterize a sample.
In the second set of embodiments, a system for determining model parameters associated with a sample is described. During operation, the system may apply an excitation to the sample using a source. The system may then measure a response to the excitation associated with the sample using a measurement device. Furthermore, the system may use the measured response and information specifying the excitation as inputs to a predetermined predictive model to calculate, on a voxel-by-voxel basis, the model parameters in a forward model with a plurality of voxels representing the sample. The forward model may simulate the response physics occurring within the sample to a given excitation. Moreover, the forward model may be a function of the excitation, the model parameters for the plurality of voxels, and differential or phenomenological equations that approximate the response physics. Next, the system may use a processor to determine the accuracy of the model parameters by comparing at least the measured response with a predicted response calculated using the forward model, the model parameters, and the excitation. Then, when the accuracy exceeds a predefined value, the system may provide the model parameters as an output to a user, to another electronic device, to a display, and/or to memory.
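The measure, predict, verify loop described above can be sketched schematically. All component names and the toy "physics" (response = parameters times excitation) below are hypothetical stand-ins, not the patent's implementation:

```python
import numpy as np

def rms_error(a, b):
    """Root-mean-square difference between predicted and measured responses."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def characterize(measure, predict, forward, excitation,
                 threshold, modify, max_rounds=10):
    """Apply excitations until the forward model matches the measurement."""
    for _ in range(max_rounds):
        measured = measure(excitation)
        params = predict(measured, excitation)    # voxelwise estimates
        predicted = forward(params, excitation)   # simulated response
        if rms_error(predicted, measured) < threshold:
            return params                         # desired accuracy reached
        excitation = modify(excitation)           # adapt the excitation
    return params

# Toy stand-ins for the source/measurement device and the two models.
true_params = np.array([1.0, 2.0, 3.0])
measure = lambda e: true_params * e               # "physics" of the sample
predict = lambda m, e: m / e                      # predictive model
forward = lambda p, e: p * e                      # forward model

params = characterize(measure, predict, forward, excitation=0.5,
                      threshold=1e-9, modify=lambda e: e * 2)
```

The key structural point is that the predictive model estimates the parameters directly from one measurement, and the forward model is used only to verify (and, if needed, to drive a modified excitation), rather than being inverted iteratively.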
By determining model parameters for voxels in the sample (sometimes referred to as "tensor field mapping" or TFM, since the parameters in the voxels may be represented by a mixed tensor rather than the true tensor for the vector field), this computational technique may reduce or eliminate the need for iterative measurements and adaptation when determining the model parameters. Thus, computational techniques may significantly reduce the use of system resources (e.g., processor time, memory, etc.) in determining model parameters. Furthermore, if accuracy is not sufficient (e.g., when accuracy is less than a predefined value), computational techniques may be used to direct modification of the excitation to facilitate rapid convergence of the model parameters with the desired accuracy. Furthermore, by providing a forward model that predicts a physical phenomenon based on model parameters for a determined excitation value or intensity range, the computational techniques may facilitate rapid and accurate characterization of the sample (e.g., determination of one or more physical parameters of the sample). Accordingly, computational techniques may be used to dynamically adapt or modify the excitation used in the measurement and/or may facilitate improved sample characterization.
These functions may result in shorter MR scan or measurement times, increased throughput, thereby reducing measurement costs, improving user experience (e.g., by reducing the time one spends in the closed environment of the magnet bore in the MR scanner), and greater use of characterization techniques. In addition, computational techniques can facilitate quantitative analysis of measurements, which can improve accuracy and reduce errors, thereby improving human health and well-being.
In general, the computational techniques can be used in conjunction with a variety of characterization techniques and forward models that quantitatively simulate the response physics occurring within the sample to a given excitation. For example, the characterization technique may involve: x-ray measurements (such as x-ray imaging, x-ray diffraction, or computed tomography), neutron measurements (such as neutron diffraction), electron measurements (such as electron microscopy or electron spin resonance), optical measurements (such as optical imaging or optical spectroscopy that determines a complex index of refraction at one or more wavelengths), infrared measurements (such as infrared imaging or infrared spectroscopy that determines a complex index of refraction at one or more wavelengths), ultrasound measurements (such as ultrasound imaging), proton measurements (such as proton scattering), MR measurements or MR techniques (such as MRI, MR spectroscopy or MRS with one or more types of nuclei, magnetic resonance spectroscopic imaging or MRSI, MR elastography or MRE, MR thermometry or MRT, magnetic-field relaxation measurements, diffusion tensor imaging, and/or other MR techniques, such as functional MRI, metabolic imaging, molecular imaging, blood-flow imaging, etc.), impedance measurements (such as electrical impedance at DC and/or AC frequencies), and/or susceptibility measurements (such as magnetic susceptibility at DC and/or AC frequencies).
Thus, the stimulus may comprise at least one of: an electromagnetic beam in the x-ray band (e.g., between 0.01 and 10 nm), a neutron beam, an electron beam, an electromagnetic beam in the optical band (e.g., between 300 and 800 nm), an electromagnetic beam in the infrared band (e.g., between 700nm and 1 mm), an acoustic wave in the ultrasonic band (e.g., between 0.2 and 1.9 mm), a proton beam, an electric field associated with an impedance measurement device, a radio frequency wave associated with an MR apparatus or scanner, and/or a magnetic field associated with a susceptibility measurement device. However, another non-invasive characterization technique (e.g., positron emission spectroscopy), combination therapy (e.g., proton beam therapy or proton implantation, radiotherapy, magnetically permeable nanoparticles, etc.), and/or a different range of wavelengths (e.g., ultraviolet wavelengths between 10 and 400 nm) may be used. In general, computational techniques can be used with a variety of stimuli, as long as forward models exist that describe the physics of responses to these stimuli, and can be used to "excite" spatial regions. In the discussion that follows, MR techniques are used as an illustrative example of a characterization technique.
Note that the sample may include an organic material or an inorganic material. For example, the sample may include: an inanimate (i.e., non-biological) sample, a biological living body (e.g., a human or animal, i.e., an in vivo sample), or a tissue sample from an animal or human (i.e., a part of an animal or human). In some embodiments, the tissue sample is previously removed from the animal or human. Thus, the tissue sample may be a pathological sample (e.g., a biopsy sample), which may be formalin-fixed paraffin-embedded. In the discussion that follows, a sample is a person or an individual, which is used as an illustrative example.
We now describe embodiments of the system. Fig. 1 presents a block diagram illustrating an example of a system 100. In the system 100, a source 110 selectively provides an excitation to a sample 112, and a measurement device 114 selectively performs measurements on the sample 112 in order to measure the response of the sample 112 to the excitation. In addition, the system 100 includes a computer 116. As described further below with reference to fig. 9, the computer 116 may include subsystems, such as a processing subsystem, a memory subsystem, and a networking subsystem. For example, the processing subsystem may include a processor that executes program instructions, the memory subsystem may include memory that stores the program instructions, and the networking subsystem may include an interface that transmits instructions or commands to the source 110 and the measurement device 114 (e.g., one or more sensors), receives measurement results from the measurement device 114, and selectively provides the determined model parameters.
During operation, a communication engine (or module) 120 in the computer 116 may provide instructions or commands to the source 110 via the network 118 (e.g., one or more wired and/or wireless links or interconnects), which may cause the source 110 to apply stimuli to the sample 112. Such excitation may have at least a wavelength and an intensity or flux. For example, the stimuli may include: electromagnetic radiation, radio frequency waves, particle beams, acoustic waves, magnetic fields and/or electric fields.
In some embodiments, the excitation may include an external magnetic field that polarizes one or more types of nuclei in sample 112, an optional gradient in the magnetic field, and/or a Radio Frequency (RF) pulse sequence (sometimes referred to as "measurement conditions" or "scan instructions"). Thus, the source 110 may include a magnet to apply an external magnetic field, an optional gradient coil to apply an optional gradient, and/or an RF coil to apply an RF pulse sequence.
The communication engine 120 may then provide instructions or commands to the measurement device 114 via the network 118, which may cause the measurement device 114 to perform measurements of the response of at least a portion of the sample 112 to the stimulus. Further, measurement device 114 may provide the measurement results to communication engine 120 via network 118. Note that the measurement device 114 may include: an x-ray detector, a neutron detector, an electron detector, an optical detector, an infrared detector, an ultrasound detector, a proton detector, an MR device or scanner, an impedance measurement device (such as a gel covered table in an MR device or scanner) and/or a magnetic sensitivity measurement device.
In some embodiments, the measurement device 114 may include one or more RF pickup coils or another magnetic sensor (e.g., magnetometer, superconducting quantum interference device, photoelectron, etc.) that measures a time-varying or time-domain electrical signal corresponding to the dynamic behavior of nuclear spins in one or more types of nuclei of at least a portion of the sample 112, or at least an average component of magnetization (sometimes referred to as a "magnetic response") corresponding to the aggregate dynamic behavior of the nuclear spins. For example, measurement device 114 may measure the transverse magnetization of at least a portion of sample 112 as it precesses in the xy plane.
Note that the measurement results provided by the measurement device 114 may be other than or in addition to an image. For example, the measurement results may be other than MRI results. Notably, the measurements may include or may correspond to the free induction decay (e.g., one or more components thereof) of the nuclear spins in the sample 112. Thus, in some embodiments, the measurements may not involve performing a Fourier transform on the measured electrical signals (and thus may not be performed in k-space, and may not involve pattern matching in k-space, such as MR fingerprinting). However, in general, the measurement results may be specified in the time domain and/or the frequency domain. Accordingly, in some embodiments, various signal-processing (e.g., filtering, image processing, etc.), noise-cancellation and transformation techniques (e.g., a discrete Fourier transform, a Z transform, a discrete cosine transform, data compression, etc.) may be performed on the measurements.
After receiving the measurement, an analysis engine (or module) 122 in the computer 116 may analyze the measurement. This analysis may involve determining a (possibly time-varying) 3D position (sometimes referred to as "3D registration information") of the sample 112 relative to the measurement device 114. For example, the alignment may involve performing a point set registration, such as using reference markers at known spatial locations. Registration may use global or local positioning systems to determine changes in the position of sample 112 relative to measurement device 114. Alternatively or additionally, the registration may be based at least in part on a change in larmor frequency and a predetermined spatial magnetic field inhomogeneity or a change in the magnetic field of the source 110 and/or the measurement device 114 (e.g., MR apparatus or scanner). In some embodiments, the analysis involves aligning the voxel with a desired voxel location based at least in part on the registration information, and/or resampling and/or interpolating the measured signal to a different voxel location, which may facilitate subsequent comparisons with previous measurements or results.
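The point-set registration mentioned above can be sketched with the standard Kabsch/Procrustes algorithm, assuming fiducial markers at known reference coordinates. The patent does not specify this particular algorithm; the function name and marker coordinates are illustrative assumptions.

```python
import numpy as np

def rigid_registration(markers_ref, markers_obs):
    """Kabsch/Procrustes point-set registration: find rotation r and
    translation t such that r @ obs + t aligns observed fiducial markers
    with their known reference positions."""
    c_ref = markers_ref.mean(axis=0)
    c_obs = markers_obs.mean(axis=0)
    h = (markers_obs - c_obs).T @ (markers_ref - c_ref)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c_ref - r @ c_obs
    return r, t

# four illustrative markers; the "observed" set is the sample shifted rigidly
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
obs = ref + np.array([0.1, -0.2, 0.3])
r, t = rigid_registration(ref, obs)
# r is (numerically) the identity and t undoes the shift
```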
Further, analysis engine 122 may use the measurements to determine model parameters for a forward model having a plurality of voxels representing the sample 112, where the forward model simulates the response physics occurring in the sample 112 for a given excitation within a range of possible excitations (i.e., the forward model may be more general than a model that determines a predicted response to one particular or specific excitation). Notably, given appropriate model parameters for the voxels in the sample 112, the analysis engine 122 can use the forward model to accurately and quantitatively simulate or compute the predicted response (e.g., a predicted component of the magnetization) of the sample 112 to the excitation. Note that the forward model may be based at least in part on, or may use, one or more differential equations or one or more phenomenological equations that approximate the response physics of the sample 112 on a voxel-by-voxel basis. For example, the forward model may be based on or may use: the Bloch equations; the Bloch-Torrey equations (in which case the forward model may include dynamic simulations, such as motion associated with respiration, heartbeat, blood flow, mechanical motion, etc.); full-Liouvillian computations (such as a Liouvillian supermatrix of the interactions between two or more elements); a full Hamiltonian; Maxwell's equations (e.g., the forward model may calculate the magnetic and electrical properties of the sample 112); the thermal-diffusion equation; the Pennes (bioheat) equation; and/or another simulation technique that represents the physics of the response of the sample 112 to a given excitation. Because in some embodiments the assumptions behind the Bloch equations are invalid (e.g., the parallel and antiparallel components of the magnetization are coupled, such as when the magnetization state is not reset prior to an RF pulse sequence), additional error terms may be added to the Bloch equations.
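A minimal sketch of the voxel-level physics such a forward model might integrate is the relaxation part of the Bloch equations (rotating frame, no RF, on resonance). The explicit-Euler integrator, step sizes, and parameter values below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def bloch_relax(m, dt, t1, t2, m0=1.0):
    """One explicit-Euler step of the Bloch relaxation terms in the rotating
    frame (no RF, on resonance): the transverse components decay with T2 and
    the longitudinal component recovers toward m0 with T1."""
    mx, my, mz = m
    return np.array([mx - dt * mx / t2,
                     my - dt * my / t2,
                     mz + dt * (m0 - mz) / t1])

# after a 90-degree pulse the magnetization lies along x
m = np.array([1.0, 0.0, 0.0])
for _ in range(10000):                    # integrate 1 s with dt = 0.1 ms
    m = bloch_relax(m, dt=1e-4, t1=1.0, t2=0.1)
# transverse magnetization has decayed (t = 10*T2); mz has recovered to ~63%
```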
Thus, the forward model is able to compute the dynamic (e.g., time-varying) state of the sample 112 in response to a possible stimulus or any stimulus within a range of stimulus values.
In some analytical methods, the computer 116 may determine the model parameters by iteratively modifying the model parameters associated with the voxels in the forward model until the difference between the predicted response and the measured dynamic magnetic response is less than a predefined value (e.g., 0.1%, 1%, 5% or 10%). (Note that this "inverse problem" begins with one or more results or outputs and then computes the input or cause; it is the inverse of the "forward problem," which begins with the input and then computes one or more results or outputs.) However, in this "iterative approach," the source 110 may need to repeatedly apply different excitations and the measurement device 114 may need to repeatedly perform corresponding measurements. Consequently, the iterative approach can be time-consuming, expensive and complex, and may consume a significant amount of resources in the system 100 before the appropriate model parameters are determined.
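The iterative approach can be sketched as a loop that perturbs the model parameters until the predicted response matches the measurement. The toy mono-exponential forward model and the finite-difference gradient-descent update below are illustrative assumptions; the patent does not prescribe a particular optimizer.

```python
import numpy as np

def forward(params, t):
    """Toy mono-exponential forward model: predicted signal envelope for
    illustrative parameters (m0, t2)."""
    m0, t2 = params
    return m0 * np.exp(-t / t2)

def fit_iteratively(measured, t, params, lr=0.05, tol=1e-8, max_iter=5000):
    """Iteratively revise the model parameters until the predicted response
    matches the measurement (finite-difference gradient descent)."""
    def loss(p):
        return np.mean((forward(p, t) - measured) ** 2)
    for _ in range(max_iter):
        current = loss(params)
        if current < tol:
            break
        grad = np.zeros_like(params)
        for i in range(len(params)):
            p = params.copy()
            p[i] += 1e-6
            grad[i] = (loss(p) - current) / 1e-6
        params = params - lr * grad
    return params

t = np.linspace(0.0, 0.5, 50)
measured = forward(np.array([1.0, 0.1]), t)          # "ground truth" signal
estimate = fit_iteratively(measured, t, np.array([0.5, 0.2]))
# the predicted response converges toward the measurement
```

Each pass through the loop corresponds to one "apply, measure, compare" iteration in the text; the cost of many such iterations is the motivation for the predictive-model shortcut described next.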
As further described below with reference to Figs. 2-5, to address these issues, in the computational technique the analysis engine 122 may use one or more predetermined or pretrained predictive models (such as a machine-learning model or a neural network), which may be specific to a particular sample or individual (i.e., the predictive model may be a personalized predictive model), to compute the model parameters, at least in part, on a voxel-by-voxel basis. For example, the analysis engine 122 may use the measurements and information specifying the excitation as inputs to the predictive model, which provides the model parameters associated with the voxels as an output, such that the determined model parameters are intrinsic to the sample 112 at the particular time of the measurements.
Note that the model parameters may include: the spin-lattice relaxation time T1 (the time constant associated with the loss of signal intensity as the component of the nuclear-spin magnetization vector of a type of nuclei relaxes toward the direction parallel to the external magnetic field), the spin-spin relaxation time T2 (the time constant associated with the broadening of the signal during relaxation of the component of the nuclear-spin magnetization vector of a type of nuclei perpendicular to the direction of the external magnetic field), the adjusted spin-spin relaxation time T2*, the proton or nuclear density (more generally, the density of one or more types of nuclei), diffusion (such as components of the diffusion tensor), velocity/flow, temperature, off-resonance frequency, electrical conductivity or permittivity, and/or magnetic susceptibility or permeability.
If a subsequent simulation using the forward model and the model parameters provided by the predictive model accurately reproduces the one or more responses (e.g., simulated or predicted MR signals) of the sample 112 to the one or more excitations and the corresponding measurements (e.g., the difference between a predicted response and a measurement result is less than a predefined value, such as 0.1%, 1%, 5% or 10%, or the accuracy exceeds a predefined value), a results engine (or module) 124 in the computer 116 may provide the determined model parameters, such as by providing an output to a user, another electronic device, a display and/or a memory. In some embodiments, the results engine 124 may output a tensor-field map of the sample 112 with the model parameters in three spatial dimensions x one time dimension x up to N measurement dimensions, where each measurement may be a vector or scalar quantity.
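One possible in-memory layout for the tensor-field map described above is a five-dimensional array. The shapes, the choice N = 2, and the parameter ordering below are illustrative assumptions, not the patent's data format.

```python
import numpy as np

# Hypothetical tensor-field-map layout: model parameters per voxel over three
# spatial dimensions, one time dimension, and N measurement dimensions
# (here N = 2, e.g. T1 and T2). All shapes and values are illustrative.
nx, ny, nz, nt, n_meas = 16, 16, 8, 4, 2
field_map = np.zeros((nx, ny, nz, nt, n_meas))

# store, e.g., T1 = 1.1 s and T2 = 0.08 s for one voxel at one time point
field_map[5, 5, 3, 0] = [1.1, 0.08]
```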
Thus, when the accuracy exceeds a predefined value (such as 90%, 95%, 99% or 99.9%), the model parameters can be calculated in a single pass without further iteration. Consequently, compared with an iterative approach that lacks a predetermined predictive model, model parameters having an accuracy exceeding the predefined value may be calculated using fewer (or no) iterations, and thus faster.
Alternatively, when the accuracy is less than the predefined value, the computer 116 may perform one or more iterations in which one or more different, modified or revised excitations (e.g., different RF pulse sequences) are applied by the source 110 to the sample 112, and one or more corresponding additional measurements are performed by the measurement device 114. The computer 116 may then use the one or more additional measurements to revise the determination of the model parameters whose accuracy was below the predefined value.
For example, the analysis engine 122 may use a second predetermined predictive model (such as a second machine-learning model or a second neural network) to determine the modified excitation. Notably, the second predictive model may use the information specifying the excitation and the accuracy as inputs and may output the modified excitation. The system 100 may then repeat the applying, measuring, calculating and determining operations with the modified excitation instead of the original excitation. Thus, the second predictive model may be trained on, or may incorporate, the residual differences between the predicted responses and the measurements for given excitation information, so as to reduce or eliminate those residual differences in one or more subsequent iterations of the operations performed by the system 100. In some embodiments, when the accuracy is less than the predefined value, the second predictive model may modify the sampling frequencies, characterization techniques, etc., to obtain the additional information that allows the model parameters determined using the first predictive model to converge. In other words, the next perturbation or excitation may be selected to minimize the error or difference over the hyper-dimensional space.
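Selecting the next excitation to minimize the expected error can be sketched as scoring a set of candidate excitations. The surrogate scoring function below is a toy stand-in for the second predictive model; all names, parameters, and values are illustrative assumptions.

```python
import numpy as np

def expected_error(excitation, residual):
    """Toy surrogate for the second predictive model: score how well a
    candidate excitation (flip angle in degrees, repetition time in s)
    would be expected to shrink the current residual. Purely illustrative."""
    flip_angle, tr = excitation
    sensitivity = np.sin(np.deg2rad(flip_angle)) * np.exp(-tr)
    return abs(residual) * (1.0 - sensitivity)

candidates = [(30.0, 0.5), (60.0, 0.3), (90.0, 0.1)]
residual = 0.2                          # current prediction-measurement gap
best = min(candidates, key=lambda e: expected_error(e, residual))
# the highest-sensitivity candidate, (90.0, 0.1), scores lowest here
```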
In some embodiments, when the accuracy is less than a predefined value, the training engine (or module) 126 in the computer 116 may: adding the stimulus and the measured response to a training data set; and using the training data set to determine a revised instance of the predictive model for subsequent use in determining the model parameters. Thus, the measurements performed by the system 100 may be selectively used in adaptive learning techniques to improve the predictive model, and thus the model parameters, for the determination of a range of stimuli (e.g., different values of wavelength and intensity or flux).
Using the model parameters and the forward model, the analysis engine 122 can simulate or predict the response of the sample 112 to an arbitrary excitation, such as: any external magnetic-field strength or direction (e.g., 0 T, 6.5 mT, 1.5 T, 3 T, 4.7 T, 9.4 T and/or 15 T, or a time-varying direction, e.g., a slowly rotating external magnetic field), any optional gradient, any pulse sequence, any magnetic state or condition (e.g., where the magnetization or polarization of the sample 112 has not returned, reset or re-magnetized to its initial state prior to a measurement), and so on. Thus, the model parameters and the forward model can be used to facilitate fast and more accurate measurements, such as: soft-tissue measurements, morphological studies, chemical-shift measurements, magnetization-transfer measurements, MRS, measurements of one or more types of nuclei, Overhauser measurements, and/or functional imaging. For example, in embodiments where the computer 116 determines the model parameters concurrently with the measurements performed on the sample 112 by the source 110 and the measurement device 114 (i.e., in real time), the system 100 may, in less than T1 or T2, rapidly characterize one or more physical parameters (at the voxel level or an average level) of the sample 112 in any type of tissue. This capability may allow the system 100 to perform initial measurements to determine the model parameters, and then use the determined model parameters to simulate or predict the MR signals in order to complete or fill in the ongoing measurements being performed by the system 100, so that results may be obtained faster (and, thus, MR scan times may be shorter).
Note that in some embodiments, the system 100 may determine a result (e.g., detect an anomaly or change in the sample 112) based at least in part on a quantitative comparison with previous results obtained on the sample 112, such as stored model parameters for voxels in the sample 112 that were determined during one or more previous MR scans of the sample 112. Such comparisons can be facilitated by the 3D registration information, which allows voxel locations in the sample 112 to be aligned across different times. In some embodiments, the result is based at least in part on a physician's instructions, medical laboratory test results (e.g., a blood test, a urine test, a biopsy, a genetic or genomic test, etc.), an individual's medical history, an individual's family history, quantitative tensor-field maps of voxel-level multidimensional data of the sample 112 or other samples, the impedance of the sample 112, the hydration level of the sample 112, and/or other inputs.
Furthermore, as described further below with reference to Fig. 6, in some embodiments the analysis engine 122 may classify or segment one or more anatomical structures in the sample 112 using the determined model parameters and a third predetermined predictive model (e.g., a third machine-learning model and/or a third neural network). For example, using the simulated or predicted response of the sample 112 at the voxel level, or the model parameters determined at the voxel level, the third predictive model may output the locations of different anatomical structures and/or classifications of different voxels (such as the type of organ, or whether they are associated with a particular disease state, e.g., a type of cancer, a stage of cancer, etc.). Thus, in some embodiments, the third predictive model may be trained on, or may incorporate, segmentation or classification information based at least in part on changes (e.g., discontinuous changes) in the model parameters across boundaries between different voxels. This capability may allow the analysis engine 122 to identify different anatomical structures (which may help determine the model parameters) and/or to diagnose, or make diagnostic recommendations regarding, medical conditions or disease states. In some embodiments, the classification or segmentation is performed before, concurrently with, or after the determination of the model parameters.
In some embodiments, the training engine 126 may train the predictive model, the second predictive model, and/or the third predictive model, at least in part, using the simulation data set. For example, the training engine 126 may have generated a simulation dataset using a forward model, model parameter ranges, and excitation ranges. In this way, the simulation data may be used to accelerate the training of one or more predictive models.
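Generating such a simulated training data set can be sketched by sampling model parameters and excitations over plausible ranges and labeling each simulated signal with the parameters that produced it. The toy inversion-recovery/spin-echo envelope and all ranges below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_envelope(t1, t2, ti, te):
    """Toy forward model: signal magnitude after an inversion time ti and an
    echo time te for a voxel with relaxation times (t1, t2). Illustrative only."""
    return np.abs(1.0 - 2.0 * np.exp(-ti / t1)) * np.exp(-te / t2)

# sample model parameters over plausible ranges (cf. Table 1) and a range of
# excitations, then label each simulated signal with the parameters used
t1s = rng.uniform(0.2, 2.2, size=1000)     # s
t2s = rng.uniform(0.02, 0.2, size=1000)    # s
tis = rng.uniform(0.05, 1.0, size=1000)    # inversion times, s
tes = rng.uniform(0.01, 0.1, size=1000)    # echo times, s

signals = forward_envelope(t1s, t2s, tis, tes)   # training inputs (with ti, te)
labels = np.stack([t1s, t2s], axis=1)            # training targets
```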
Notably, because the computational technique may capture all of the relevant information during a measurement (e.g., an MR scan), the forward model may be used in an offline mode to curate a broad set of labeled data that covers a large number of possible scenarios (e.g., different measurement conditions). This database can then be used to train the predictive model. This capability may address the difficulty of obtaining accurately labeled, repeatable and artifact-free MR data.
In conjunction with the generated data set, one or more predictive models may be used to select regularization that accelerates initial data acquisition and/or denoising. In addition, one or more predictive models may also be used to accelerate the simulation or reconstruction using the forward model. For example, the predictive model may provide initial model parameters for the forward model, which may reduce the number of iterations required for measurement and simulation to converge on a solution with an accuracy that exceeds a predefined value. Thus, if the initial model parameters produce a predicted response that is very different from the measurement results, this can be fed back into subsequent measurements and simulations to improve the model parameters, and thus the predicted response.
Furthermore, if a portion of the model-parameter space is not covered by the predictive model(s), new data points can be accurately generated and labeled to train the predictive model(s). Further, the predictive model(s) may be trained using different metrics corresponding to different applications. For example, the predictive model(s) may be trained to optimize the excitations used in different scenarios (e.g., a fast scan of an asymptomatic population, high accuracy for specific tissue properties, robustness to signal-to-noise-ratio variations, different hardware imperfections, etc.).
In some embodiments, the analysis engine 122 may run a neural network that determines first model parameters based at least in part on measured or simulated data, and may also perform a brute-force nonlinear numerical calculation that solves the inverse problem using the same measured or simulated data to determine second model parameters. The difference between the first and second model parameters from these two "inverse solvers" can be used as the error for the neural-network-based approach. Such an approach may facilitate neural-network learning, because the numerical approach may be able to provide real-time feedback to the neural network and to backpropagate/update the weights in the neural network. This hybrid approach may not need or require prior training, yet it can solve the inverse problem with the pattern-matching advantages of large neural networks and the determinism and accuracy of analytical/numerical techniques. The hybrid approach may also help the neural network when it receives an input that differs from all of the examples used to train it. Similarly, the hybrid approach can be applied directly from the time-domain measurements to the model-parameter output (i.e., the inverse-problem output). In some embodiments, the hybrid approach is implemented using a generative adversarial network (GAN).
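The hybrid idea of checking a network's inverse solution against a deterministic numerical inverse can be sketched for a mono-exponential signal, where a log-linear least-squares fit plays the role of the brute-force solver. The placeholder "network" below just returns a crude guess; everything here is an illustrative assumption, not the patent's solver.

```python
import numpy as np

def numerical_inverse(signal, t):
    """Deterministic 'brute-force' inverse for a mono-exponential decay:
    a log-linear least-squares fit recovers (m0, t2) exactly for clean data."""
    slope, intercept = np.polyfit(t, np.log(signal), 1)
    return np.array([np.exp(intercept), -1.0 / slope])

def network_inverse(signal, t):
    """Placeholder for the neural-network inverse solver (a crude guess here);
    in the hybrid scheme its output is compared against the numerical solver."""
    return np.array([signal[0], 0.15])

t = np.linspace(0.01, 0.5, 50)
signal = 1.0 * np.exp(-t / 0.1)                  # clean simulated measurement

p_num = numerical_inverse(signal, t)             # ~(1.0, 0.1)
p_net = network_inverse(signal, t)
error = np.abs(p_net - p_num)   # residual used to update the network's weights
```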
Note that in some embodiments, the forward model may be independent of the particular MR device or scanner. Rather, the forward model may be, for example, individual-specific. The predicted response calculated using the forward model may be adjusted to include characteristics or features of the particular MR device or scanner, such as magnetic field inhomogeneity or spatial variations in the magnetic field, RF noise, the particular RF pickup coil or another magnetic sensor, changes in characteristics or features with external magnetic field strength or measurement conditions (such as voxel size), geographical location, time (due to e.g. magnetic storms), etc. Thus, the predicted response may be machine specific.
Although the foregoing discussion describes the computational technique using a single predictive model of the sample 112, in other embodiments there may be multiple predictive models for the sample 112. For example, different predictive models may be used to determine the model parameters for different portions of the sample 112 (e.g., different organs or different types of tissue), and thus for different voxels. Thus, in some embodiments, different predictive models may be used to provide the T1 and T2 values in different types of tissue, as summarized in Table 1.
Tissue                 T1 (s)       T2 (ms)
Cerebrospinal fluid    0.8-20       110-2000
White matter           0.76-1.08    61-100
Gray matter            1.09-2.15    61-109
Meninges               0.5-2.2      50-165
Muscle                 0.95-1.82    20-67
Fat                    0.2-0.75     53-94

TABLE 1
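As one illustration of how values like those in Table 1 might be used, a voxel's estimated (T1, T2) pair can be checked against the tabulated ranges. Because the ranges overlap, several tissues can match a given pair, so this is only a toy pre-classifier sketch, not the patent's segmentation method; the tissue names and helper function are illustrative.

```python
# Toy lookup against Table 1: list the tissues whose (T1, T2) ranges contain a
# voxel's estimated values. Ranges overlap, so several tissues may match.
TISSUE_RANGES = {
    "cerebrospinal fluid": ((0.8, 20.0), (110.0, 2000.0)),
    "white matter":        ((0.76, 1.08), (61.0, 100.0)),
    "gray matter":         ((1.09, 2.15), (61.0, 109.0)),
    "meninges":            ((0.5, 2.2), (50.0, 165.0)),
    "muscle":              ((0.95, 1.82), (20.0, 67.0)),
    "fat":                 ((0.2, 0.75), (53.0, 94.0)),
}

def candidate_tissues(t1_s, t2_ms):
    """Return all tissue types whose ranges contain (t1_s, t2_ms)."""
    return [name
            for name, ((t1_lo, t1_hi), (t2_lo, t2_hi)) in TISSUE_RANGES.items()
            if t1_lo <= t1_s <= t1_hi and t2_lo <= t2_ms <= t2_hi]

# a voxel with T1 = 0.9 s and T2 = 80 ms is consistent with two tissue types
```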
Further, although the system 100 is illustrated as having particular components, in other embodiments, the system 100 may have fewer or more components, two or more components may be combined into a single component, and/or the position of one or more components may be changed.
We now describe an embodiment of a method. Fig. 2 presents a flow diagram illustrating an example of a method 200 for determining model parameters associated with a sample. This method may be performed by a system (e.g., system 100 in fig. 1) or one or more components in a system (e.g., source 110, measurement device 114, and/or computer 116).
During operation, a source in the system may apply an excitation to the sample (operation 210), where the excitation has at least a wavelength and an intensity or flux. For example, the stimulus may comprise one of: electromagnetic radiation, radio frequency waves, particle beams, acoustic waves, magnetic fields and/or electric fields. Thus, the stimulus may comprise at least one of: an electromagnetic beam in the x-ray band, a neutron beam, an electron beam, an electromagnetic beam in the optical band, an electromagnetic beam in the infrared band, an acoustic wave in the ultrasonic band, a proton beam, an electric field associated with an impedance measuring device, a radio frequency wave associated with a magnetic resonance apparatus, and/or a magnetic field associated with a susceptibility measuring device.
A measurement device in the system may then measure a response to the stimulus associated with the sample (operation 212). For example, the measurement device may comprise at least one of: an x-ray detector, a neutron detector, an electron detector, an optical detector, an infrared detector, an ultrasound detector, a proton detector, a magnetic resonance apparatus, an impedance measurement device, and/or a susceptibility measurement device. Note that the measured response may comprise a time-domain response of the sample and may be other than or in addition to an image.
Further, the system may use the measured response and the information specifying the excitation as inputs to a predetermined predictive model to calculate, on a voxel-by-voxel basis, model parameters in a forward model having a plurality of voxels representing the sample (operation 214). The forward model may simulate the response physics occurring within the sample to a given excitation having a given wavelength and a given intensity or a given flux, selected from a range of measurement conditions that includes the excitation's wavelength and intensity or flux as well as at least a different wavelength and at least a different intensity or flux. Further, the forward model may be a function of the excitation, the model parameters of the plurality of voxels, and differential or phenomenological equations that approximate the response physics.
Note that the predetermined predictive model may include a machine learning model and/or a neural network. In some embodiments, the predetermined predictive model comprises a personalized predictive model corresponding to the individual.
Next, the system may determine an accuracy of the model parameters by comparing at least the measured response with a predicted response calculated using the forward model, the model parameters and the excitation (operation 216).
Further, when the accuracy exceeds a predefined value (operation 218), the system may provide the model parameters (operation 220) as an output, for example, to a user, another electronic device, a display, and/or a memory.
Thus, when the accuracy exceeds a predefined value (operation 218), the model parameters may be calculated in a single pass without further iteration. Thus, model parameters having an accuracy exceeding a predefined value may be calculated using fewer iterations with a predetermined prediction model than in an iterative method without a predetermined prediction model.
Alternatively, when the accuracy is less than a predefined value (operation 218), the system may: calculating a modified excitation having at least a modified wavelength, a modified intensity, or a modified flux using the information specifying the excitation and accuracy as inputs to a second predetermined prediction model (operation 222); and repeating (operation 224) the applying, measuring, calculating, and determining operations with the modified excitation instead of the excitation. Note that the second predetermined predictive model may include a machine learning model and/or a neural network.
In some embodiments, the system optionally performs one or more optional additional or alternative operations. For example, when the accuracy is less than a predefined value (operation 218), the system may: adding the stimulus and the measured response to a training data set; and determining a revised instance of the predictive model using the training data set.
Further, the system may classify or segment one or more anatomical structures in the sample using the model parameters and the third predictive model. For example, the third predetermined predictive model may include a machine learning model and/or a neural network.
Further, the system may train the predictive model using the simulation dataset computed using the forward model, the model parameter ranges, and the excitation ranges.
FIG. 3 presents a diagram illustrating an example of communication between components in system 100 (FIG. 1). Notably, the processor 310 in the computer 116 may execute program instructions (P.I.) 312 stored in the memory 314. When the processor 310 executes the program instructions 312, the processor 310 may perform at least some of the operations in the computational technique.
During the computational technique, the processor 310 may provide instructions 318 to an interface circuit (I.C.) 316. In response, the interface circuit 316 may provide the instructions 318 to the source 110, e.g., in one or more packets or frames. Further, after receiving the instructions 318, the source 110 may apply an excitation 320 to the sample.
The processor 310 may then provide instructions 322 to the interface circuit 316. In response, interface circuitry 316 may provide instructions 322 to measurement device 114, e.g., in one or more packets or frames. Further, after receiving the instructions 322, the measurement device 114 may measure a response 324 associated with the sample to the stimulus 320. Next, the measurement device 114 may provide the measurement response 324 to the computer 116, e.g., in one or more packets or frames.
After receiving the measurement response 324, the interface circuit 316 may provide the measurement response 324 to the processor 310. Then, using the measured response 324 and information specifying the excitation 320 as inputs to a predetermined predictive model, the processor 310 may calculate model parameters (M.P.)326 on a voxel-by-voxel basis in a forward model having a plurality of voxels representing the sample.
Further, the processor 310 may determine an accuracy 328 of the model parameters by comparing at least the measured response 324 with a predicted response calculated using the forward model, the model parameters 326 and the excitation 320. When the accuracy 328 exceeds a predefined value, the processor 310 may provide the model parameters 326 as an output, e.g., to a user, another electronic device (via the interface circuit 316), the display 330, and/or the memory 314.
Otherwise, when the accuracy is less than the predefined value, the processor 310 may perform remedial action 332. For example, the processor 310 may: calculating a modified excitation using information specifying the excitation 320 and the accuracy 328 as input to a second predetermined prediction model; and repeating the applying, measuring, calculating, and determining operations with the modified excitation instead of the excitation 320. Alternatively or additionally, the processor 310 may: adding the stimulus 320 and the measured response 324 to the training data set; and determining a revised instance of the predictive model using the training data set.
We now describe embodiments of the predictive model. For example, the predictive model may include a machine learning model, such as a supervised learning model or an unsupervised learning technique (e.g., clustering). In some embodiments, the machine learning model may include: support vector machines, classification and regression trees, logistic regression, LASSO, linear regression, non-linear regression, pattern recognition, bayesian techniques, and/or another (linear or non-linear) supervised learning technique.
Fig. 4 presents a diagram illustrating an example of a machine learning model 400. In this machine learning model, a revised instance of the model parameters 418 is computed from a weighted (using weights 408) linear or nonlinear combination 416 of: one or more measurements 410, one or more corresponding excitations 412, and one or more errors 414 between the one or more measurements 410 and one or more predicted responses determined using the forward model, the current instance of the model parameters for the voxels in the forward model, and the one or more excitations 412. Thus, in some embodiments, the predictive model 400 is used in conjunction with the forward model to iteratively modify instances of the model parameters until the error of the predicted response is less than a predefined value (i.e., until a convergence criterion is reached). However, in some embodiments, the machine learning model may be used to determine the model parameters in a single pass (i.e., in an open-loop manner).
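The weighted combination in Fig. 4 can be sketched as a simple linear update of the current model-parameter instance by the weighted error. The linear form, parameter meanings, and weight values below are illustrative assumptions, not the patent's actual combination 416.

```python
import numpy as np

def revise_parameters(params, measurement, prediction, weights):
    """One revision step in the spirit of Fig. 4: combine the current model
    parameters with the weighted error between measurement and prediction."""
    error = measurement - prediction
    return params + weights * error

params = np.array([1.0, 0.1])            # e.g. current (T1, T2) instance
weights = np.array([0.5, 0.05])
revised = revise_parameters(params, measurement=0.8, prediction=0.7,
                            weights=weights)
# revised == [1.05, 0.105]
```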
Alternatively or additionally, the predictive model may include a neural network. Neural networks are generalized function approximators. For example, techniques such as deep learning typically use previous examples as inputs. In general, it is not possible for these machine-learning models to determine the actual functions they attempt to approximate, because they have no reference points available for estimating the prediction error. In particular, it is difficult for a neural network to make predictions based on inputs that are very different from the examples used to train it. In this regard, a neural network may be considered a lossy computational compression engine.
However, by training the neural network using a variety of excitations, measured responses and corresponding model parameters, the neural network can provide the model parameters (or initial estimates of the model parameters) for a forward model that simulates the physics of the sample's response to an excitation. Because neural networks are efficient approximators/compressors, they can execute faster on the same input while requiring less computational power. Furthermore, because the functions in the forward model are known, the response can be calculated and the accuracy of the prediction evaluated (rather than approximated). Thus, the computational technique can determine when its predictions are unreliable. In particular, as discussed previously with respect to Fig. 4, a neural network may be used in conjunction with the forward model to iteratively modify instances of the model parameters until the error of the predicted response is less than a predefined value (i.e., until a convergence criterion is reached). However, in some embodiments, a neural network may be used to determine the model parameters in a single pass (i.e., in an open-loop manner).
Fig. 5 presents a diagram illustrating an example of a neural network 500. Such a neural network may be implemented using a convolutional neural network or a recurrent neural network. For example, the neural network 500 may include a network architecture 512 that includes: an initial convolutional layer 514 that filters the input 510 (e.g., one or more measurements and the difference or error between the one or more measurements and one or more predicted responses determined using the forward model, the current instance of the model parameters, and the excitation); one or more additional convolutional layers 516 to which weights are applied; and an output layer 518 (e.g., a rectified linear layer) that performs a selection (e.g., selecting a revised instance of the model parameters). Note that the different levels of detail and their interconnections in the neural network 500 may define the network architecture 512 (e.g., a directed acyclic graph). These details may be specified by the instructions for the neural network 500. In some embodiments, the neural network 500 is reformulated as a series of matrix multiplication operations, and may be capable of handling real-world variance in 100,000 or more inputs. Note that the neural network 500 may be trained using deep-learning techniques or a GAN. In some embodiments of the machine learning model 400 (Fig. 4) and/or the neural network 500, the current instance of the model parameters is used as an input.
In some embodiments, a large convolutional neural network may include 60 M parameters and 650,000 neurons. The convolutional neural network may include eight learned layers with weights, including five convolutional layers and three fully connected layers with a final 1000-way softmax (or normalized exponential function) that produces a distribution over 1000 class labels for different possible model parameters. Some of the convolutional layers may be followed by max-pooling layers. To make training faster, the convolutional neural network may use non-saturating neurons (e.g., with local response normalization) and an efficient, dual-GPU parallelized implementation of the convolution operation. In addition, to reduce overfitting in the fully connected layers, a regularization technique (sometimes referred to as "dropout") may be used. In dropout, the predictions of different models are effectively combined to reduce the test error. In particular, the output of each hidden neuron is set to zero with a probability of 0.5. Neurons that are "dropped out" in this manner participate in neither forward propagation nor backward propagation. Note that the convolutional neural network may maximize a multinomial logistic regression objective, which is equivalent to maximizing the average, over the training cases, of the log-probability of the correct label under the prediction distribution.
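As a toy illustration of the dropout step just described (the layer shape and random seed are arbitrary assumptions; the test-time rescaling used in practice is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(activations, p=0.5, training=True):
    """Zero each hidden neuron's output with probability p during training,
    so the "dropped" neurons take part in neither forward nor backward
    propagation for that training case."""
    if not training:
        return activations  # at test time, all neurons participate
    mask = rng.random(activations.shape) >= p
    return activations * mask

hidden = np.ones((4, 1000))          # toy layer of hidden activations
dropped = dropout_forward(hidden, p=0.5)
kept_fraction = dropped.mean()       # roughly half of the outputs survive
```

Because each training case sees a different random mask, training effectively averages over an ensemble of thinned networks, which is the model-combination effect the text describes.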
In some embodiments, the kernels of the second, fourth, and fifth convolutional layers are coupled only to those kernel maps in the previous layer that reside on the same GPU. The kernels of the third convolutional layer may be coupled to all of the kernel maps in the second layer. Furthermore, the neurons in a fully connected layer may be coupled to all of the neurons in the previous layer. Further, response normalization layers may follow the first and second convolutional layers, and max-pooling layers may follow the response normalization layers as well as the fifth convolutional layer. A nonlinear neuron model, such as a rectified linear unit, may be applied to the output of every convolutional layer and fully connected layer.
In some embodiments, the first convolutional layer filters a 224 × 224 × 3 input image with 96 kernels of size 11 × 11 × 3 using a stride of four pixels (the stride is the distance between the receptive-field centers of neighboring neurons in a kernel map). Note that the second convolutional layer may take as input the (response-normalized and pooled) output of the first convolutional layer and may filter it with 256 kernels of size 5 × 5 × 48. Further, the third, fourth, and fifth convolutional layers may be coupled to one another without any intervening pooling or normalization layers. The third convolutional layer may have 384 kernels of size 3 × 3 × 256 coupled to the (normalized, pooled) output of the second convolutional layer. Further, the fourth convolutional layer may have 384 kernels of size 3 × 3 × 192, and the fifth convolutional layer may have 256 kernels of size 3 × 3 × 192. The fully connected layers may have 4096 neurons each. Note that the numerical values in the foregoing and remaining discussion are for illustrative purposes only, and different values may be used in other embodiments.
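The spatial sizes implied by these layer parameters follow from the standard convolution output-size formula; a small sketch (note that a 224 × 224 input with an 11 × 11 kernel and stride 4 does not divide evenly, so reproductions of this architecture commonly assume a 227 × 227 input or a padding of 2):

```python
def conv_output_size(n, kernel, stride=1, padding=0):
    """Spatial output size of a convolution or pooling window:
    floor((n + 2*padding - kernel) / stride) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

# First convolutional layer described above: 11x11 kernels, stride 4.
first = conv_output_size(227, kernel=11, stride=4)    # 55x55 output grid
# 3x3 max pooling with stride 2 then reduces it to 27x27.
pooled = conv_output_size(first, kernel=3, stride=2)  # 27x27
```

The same helper reproduces the 55 × 55 grid for a 224 × 224 input when a padding of 2 is assumed.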
In some embodiments, the convolutional neural network is implemented using at least two GPUs. One GPU may run some of the layer components while the other runs the remaining layer components, and the GPUs may communicate at certain layers. The input to the convolutional neural network may be 150,528-dimensional, and the numbers of neurons in the remaining layers of the convolutional neural network may be 253,440, 186,624, 64,896, 64,896, 43,264, 4096, 4096, and 1000.
We now describe embodiments of the forward model. This forward model may be a 3D model of voxels in a portion of the sample (such as an individual) and may include model parameters in the Bloch equations for each voxel. In particular, for a quasi-static magnetic field B0 along the z-axis, the Bloch equations are

dMx(t)/dt = γ·(M(t) × B(t))x − Mx(t)/T2,

dMy(t)/dt = γ·(M(t) × B(t))y − My(t)/T2,

and

dMz(t)/dt = γ·(M(t) × B(t))z − (Mz(t) − M0)/T1,

where γ is the gyromagnetic ratio, × denotes the vector cross product, and B(t) = (Bx(t), By(t), B0 + ΔBz(t)) is the magnetic field experienced by a type of nucleus in the sample. The model parameters in the Bloch equations may include T1, T2, the density of a type of nucleus, diffusion, velocity/flow, temperature, magnetic susceptibility, etc. Note that, for each voxel, different types of nuclei may have different model parameters. Further, note that the Bloch equations are a semi-classical, macroscopic approximation of the dynamic response of the magnetic moments of a type of nucleus in the sample to a time-varying magnetic field. For example, there may be 67 M cells in a 1 mm³ voxel.
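The Bloch dynamics above can be integrated numerically per voxel. A minimal sketch with explicit Euler steps and toy, dimensionless values (not physical constants): free relaxation after the magnetization has been tipped into the transverse plane.

```python
import numpy as np

def bloch_step(M, B, T1, T2, M0, gamma, dt):
    """One explicit-Euler step of the Bloch equations for a single voxel."""
    dM = gamma * np.cross(M, B)   # precession term, gamma * (M x B)
    dM[0] -= M[0] / T2            # transverse relaxation, x component
    dM[1] -= M[1] / T2            # transverse relaxation, y component
    dM[2] -= (M[2] - M0) / T1     # longitudinal relaxation toward M0
    return M + dt * dM

# Toy, dimensionless values: the magnetization starts fully tipped into the
# x-y plane and relaxes back toward equilibrium along the z-axis field B0.
gamma, B0, M0 = 1.0, 1.0, 1.0
T1, T2, dt = 0.8, 0.1, 1e-3
M = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, B0])
for _ in range(5000):             # integrate for 5 time units >> T1, T2
    M = bloch_step(M, B, T1, T2, M0, gamma, dt)
```

After several T1 and T2 periods, the transverse components have decayed to essentially zero while Mz has recovered to M0, as the relaxation terms dictate.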
In principle, the solution space of the model parameters in the Bloch equations for the sample may be underdetermined, i.e., there may be many more model parameters to be determined than there are observations with which to specify or constrain them. Thus, when training the predictive model or using the predictive model to determine model parameters (e.g., using a machine learning model or calculations in the layers of a neural network), the computational techniques may utilize additional information to constrain or reduce the dimensionality of the problem. For example, other imaging techniques, such as computed tomography, x-ray, ultrasound, etc., may be used to determine aspects of the anatomy of the sample. Furthermore, regions that are dissimilar to the target type of tissue (e.g., cardiac tissue), i.e., regions with very different measurements (such as different measured MR signals), may be excluded from the forward model (e.g., by setting the model parameters in these regions to zero). In this way, for example, regions consisting of air can be excluded. Other constraints in the forward model may include thermodynamic constraints on heat flow (from hot to cold) for perfusion or MRT to quantify metabolism. Furthermore, the predictive model may be trained using different pulse sequences and/or different MR techniques at different magnetic field strengths B0 (which may provide similar information to a pseudo-random pulse sequence), which may reduce the ratio of model parameters to observations, thereby simplifying the training of the predictive model.
Alternatively or additionally, tissues that deviate significantly from predicted or simulated responses (such as predicted MR signals) based on previous MR measurements or scans (e.g., anomalies or changes) may be the focus of the forward model, such as by using contour maps (e.g., cubic splines) to define regions (or boundaries of specified regions) where there is a significant difference. In some embodiments, when training a predictive model or determining model parameters using a predictive model (e.g., using a machine learning model or calculations in a neural network layer), the difference or error between the measurement and the simulated or predicted response may be represented using one or more level set functions, and the boundary of the region where the error exceeds the threshold may be determined based on the intersection of the plane corresponding to the threshold and the one or more level set functions.
In some embodiments, layers in the neural network may compute first and second derivatives along the surface(s) of the model-parameter solution in the sample. (To facilitate calculation of the derivatives, one or more level-set functions may be used to represent the model parameters.) A set of voxels along a line where the first derivative is zero may be identified. This set of voxels may be fitted using a cubic spline such that the error between the voxel locations and the cubic spline is minimized. This fitting operation may be repeated at all of the boundaries in the model-parameter-solution space. Further, the largest continuous surface within the boundary defined by the cubic splines may be determined, and the model-parameter-solution calculation may be repeated to determine a new continuous surface within the previous continuous surface. This generic framework may minimize the error within the voxel volume, thereby improving the consistency between the measurements and the simulated or predicted responses based on the forward model.
For example, the neural network may solve the inverse problem using a jacobian matrix of model parameters and newton's method for voxels in the forward model to modify the model parameters for voxels in successive layers based on how perturbations in the model parameters affect the difference or error between the measurement and the predicted response.
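A sketch of such a Jacobian-based (Gauss-Newton style) update for a toy differentiable forward model — the model and values are illustrative assumptions; because this toy model is linear in its parameters, the update converges in a single step:

```python
import numpy as np

def forward(params, excitation):
    # Toy differentiable forward model standing in for the voxel physics.
    return params[0] * np.sin(excitation) + params[1] * excitation

def jacobian(params, excitation):
    # Partial derivatives of the predicted response w.r.t. each model
    # parameter: how perturbations in the parameters affect the prediction.
    return np.stack([np.sin(excitation), excitation], axis=1)

def gauss_newton(measured, excitation, params, iters=10):
    """Newton-style updates: the Jacobian maps parameter perturbations to
    changes in the prediction, and each step reduces the residual error."""
    p = np.asarray(params, dtype=float)
    for _ in range(iters):
        r = forward(p, excitation) - measured     # error (residual)
        J = jacobian(p, excitation)
        p -= np.linalg.solve(J.T @ J, J.T @ r)    # normal-equations step
    return p

x = np.linspace(0.0, 2.0, 15)
measured = forward(np.array([2.0, -0.5]), x)
solution = gauss_newton(measured, x, params=[0.0, 0.0])
```

For the nonlinear voxel models in the text, the same loop applies with a numerically or analytically computed Jacobian, typically with damping (as in Levenberg-Marquardt, mentioned below).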
In some embodiments, if a portion of the sample includes one voxel, there may be 4-10 model parameters (which specify the forward model) that need to be determined for a particular type of tissue. If the voxel includes M types of tissue, there may be 4·M to 10·M model parameters that need to be determined for that voxel. As the number of voxels increases, this may appear to be a daunting problem.
However, because different types of nuclei have different Larmor frequencies, the spatial distribution of the types of nuclei and their local concentrations can be determined from the measurements. Then, a predefined anatomical template of the human body (or a portion of the human body), with associated initial model parameters for the forward model, may be scaled to match the spatial distribution of the types of nuclei and their local concentrations. For example, predetermined or predefined ranges of the model parameters in different types of tissue may be used to determine the ranges of the initial model parameters. In some embodiments, the initial model parameters are based on model parameters associated with previous measurements or MR scans.
Next, a look-up table with simulated or predicted responses (generated using one or more forward models) as a function of the associated model parameters and excitation may be used to modify the initial model parameters or calculate the model parameters for voxels in the sample. For example, simulated or predicted responses similar to measurements may be identified, and differences or errors between these simulated or predicted responses and measurements may be used to guide interpolation between model parameters in the lookup table.
In some embodiments, for one type of tissue (such as a particular organ), the model parameters determined using different layers in the neural network may be iteratively refined as the size of the voxels in the different layers is progressively decreased (and, thus, the number of voxels is increased). This analysis may be driven by the error between the measurements and the simulated or predicted responses using the forward model. As the computation proceeds through successive layers in the neural network, attention can be focused on regions of the residual where the error is greater than a convergence or accuracy criterion. For example, the model parameters of the forward model in one layer of the neural network may be based on measurements at one magnetic field strength, and the error may then be determined based on the predicted response of the forward model at another magnetic field strength. Further, note that the initial predictive model or forward model may assume that there is no contribution or interaction between different voxels. However, as the error and the voxel size decrease, such contributions and/or interactions may be included in subsequent layers of the neural network. In some embodiments, when multiple candidate model-parameter solutions (with similar errors) exist for the inverse problem in one layer of the neural network, at least some of these candidates may be retained for use in subsequent layers (i.e., a unique model-parameter solution may not be identified at that point). Alternatively, if no unique model-parameter solution exists within a desired error range (e.g., less than 50%, 25%, 10%, 5%, or 1%), the best (error-minimizing) model-parameter solution may be retained. Further, when no model-parameter solution exists within the desired error range, the excitation may be modified using the second predictive model, and one or more additional measurements may be performed.
Thus, inverse problems that determine model parameters based on measurements may be "solved" using a predictive model that provides model parameters that minimize errors or differences between the measurements and simulated or predicted responses generated based on the forward model, the model parameters, and the excitation. In some embodiments, one or more analytical techniques are used to solve the inverse problem, including: least squares, convex quadratic minimization techniques, steepest descent techniques, quasi-newton techniques, simplex techniques, Levenberg-Marquardt techniques, simulated annealing, genetic techniques, graph-based techniques, another optimization technique, and/or kalman filtering (or linear quadratic estimation).
Note that the training of the predictive model may use dynamic programming. In particular, the training problem may be partitioned and executed in parallel by multiple computers, for example, in a cloud-based computing system. For example, a particular thread may attempt to solve an inverse problem under particular measurement conditions. Multiple potential model parameter solutions generated by a computer (or processor) may be combined (e.g., using linear superposition) to determine an error metric that is minimized using one or more analysis techniques.
Further, as previously described, the inverse problem may be solved iteratively by a predictive model (such as a machine learning model or a neural network) by first finding suitable model parameters for the forward model using a coarse voxel size (e.g., model parameters that minimize the error between the measurements and the simulated or predicted response), and then progressively finding suitable model parameters at smaller voxel sizes in subsequent layers or stages of the computation. Note that the final voxel size (or the suitable range of voxel sizes, because the voxel size may not be fixed in some embodiments) used during this iteration may be determined based on the gyromagnetic ratio of the type of nucleus being scanned. Further, the voxel size or location may also be selected such that a voxel is evenly divided into a set of sub-voxels, or such that there is some amount of overlap with the previous voxel size, in order to effectively "oversample" the overlapping region and potentially further localize the source of the MR signal. This last technique may be similar to shifting the entire gradient system in one or more dimensions by a distance dx that is less than a characteristic length of the voxel (such as the length, width, or height of the voxel). In some embodiments, the voxel size in the predictive or forward model is smaller than the voxel size used in the measurements (i.e., the predictive or forward model may use a super-resolution technique). For example, there may be 512 × 512 or 1024 × 1024 voxels at a magnetic field strength of 3T. Note that the voxel size may be smaller than 0.25³ mm³.
We now describe embodiments of techniques for segmenting different types of tissue that can be used by a third predictive model (e.g., a neural network). Define a dictionary D of time-sampled MR trajectories (or vectors) d_j (for j = 1 to n) of measurements for different types of tissue in a multidimensional parameter space, such that the measured MR signal y_obv of a voxel can be expressed as

y_obv = Σ_j α_j·d_j + ε,

where the α_j are normalized weights (i.e., Σ_j α_j = 1) and ε is the error, i.e.,

ε = y_obv − Σ_j α_j·d_j.

This defines the intra-voxel linear-equation problem. The generalized inter-voxel problem models a set of voxels (e.g., a cube with 27 voxels) as a graph G. Note that each voxel in the set may have up to 26 edges to adjacent voxels. The model-parameter solution of the inverse problem can be defined as the solution that minimizes the error.
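The intra-voxel problem above is a least-squares fit with normalized weights. A sketch using a tiny synthetic two-entry dictionary and the KKT system of the equality-constrained problem (the dictionary entries and the 70/30 mixture are illustrative assumptions):

```python
import numpy as np

def fit_intra_voxel(y_obv, D):
    """Least-squares weights alpha for y_obv ~ D @ alpha subject to
    sum(alpha) = 1, via the KKT system of the constrained problem."""
    n = D.shape[1]
    # KKT matrix for: minimize ||D a - y||^2  subject to  1^T a = 1
    A = np.block([[2 * D.T @ D,      np.ones((n, 1))],
                  [np.ones((1, n)),  np.zeros((1, 1))]])
    b = np.concatenate([2 * D.T @ y_obv, [1.0]])
    sol = np.linalg.solve(A, b)
    alpha = sol[:n]                  # normalized weights
    residual = y_obv - D @ alpha     # the error term epsilon
    return alpha, residual

# Tiny synthetic dictionary: two time-sampled trajectories (columns) for two
# tissue types, and a voxel signal that mixes them with weights 0.7 and 0.3.
t = np.linspace(0, 1, 50)
D = np.stack([np.exp(-t / 0.3), np.exp(-t / 1.5)], axis=1)
y = 0.7 * D[:, 0] + 0.3 * D[:, 1]
alpha, eps = fit_intra_voxel(y, D)
```

Because the synthetic signal is an exact normalized mixture, the recovered weights match it and the residual ε is essentially zero; with measured signals, the nonzero ε is what the inter-voxel problem then minimizes across the graph.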
Consider the case of two neighboring voxels u and v. The intra-voxel linear equations U_y and V_y must be solved at u and v. There are several possible outcomes. First, U_y and V_y may each have a unique model-parameter solution (where a "unique model-parameter solution" may be the best fit to the existing forward model, i.e., one having an error or difference vector that is less than a convergence or accuracy criterion), and the analysis is complete. Alternatively, U_y may have a unique model-parameter solution but V_y may not. The model-parameter solution of U_y may then apply constraints to V_y such that V_y has a single model-parameter solution, in which case the analysis can again be completed. However, U_y and V_y may both lack unique model-parameter solutions, in which case combining the systems of equations (i.e., effectively increasing the voxel size) may produce a unique model-parameter solution. Finally, U_y and V_y may not have any model-parameter solution, in which case the intra-voxel problem cannot be solved without further constraints.
In the last case, it may be necessary to consider the sequence of voxels u, v, and w and to solve the intra-voxel linear equations U_y, V_y, and W_y corresponding to u, v, and w. Note that the intra-voxel linear equations V_y and W_y reduce to the previous case. When the intra-voxel linear equations do not reduce to the previous case, this analysis operation may be applied recursively until they do, and then the intra-voxel linear equations may be solved as described previously.
In general, this analytical technique is isomorphic to the problem of fitting a 3D surface (or volume) so as to minimize the error. One challenge in this regard is that it assumes that all neighboring volumes contribute equally to the model-parameter solution α_j that minimizes the error.
The minimization of the error may initially assume that there are no inter-voxel contributions (i.e., that the voxels are independent). Subsequently, inter-voxel contributions may be included. In particular, considering the neighboring voxel volumes, two different categories exist: volumes that share a surface and volumes that share only a 1D edge. The minimization function can be improved by weighting the error contributions relative to the voxel u at the center of the coordinate system. If the influence on the error is proportional to r⁻² (where r is the distance between voxel center points), and assuming 1 mm isotropic voxels in the weights, the minimization or fitting problem with inter-voxel contributions can be expressed as
min over the α_j of

‖y_u − Σ_j α_j·d_j‖² + Σ_k ‖y_k − Σ_j α_j·d_j‖² + (1/2)·Σ_l ‖y_l − Σ_j α_j·d_j‖²,
where the sum over k is over the neighboring voxels that share a common surface (i.e., (−1,0,0), (1,0,0), (0,−1,0), (0,1,0), (0,0,−1), and (0,0,1)) and the sum over l is over the remaining neighboring voxels that share a common edge. The assumption in this analysis is that the most difficult places to fit or determine a model-parameter solution are the discontinuities or interfaces between different tissues. Thus, during the computational technique, the analysis engine 122 (fig. 1) may solve these locations first and then solve the remaining locations.
Alternatively, because the magnetic contribution from neighboring voxels scales as r⁻², in the minimization problem the surrounding voxels can be weighted, given a sphere of radius R centered on the main or central voxel, based on how far the sphere extends into the volume of each neighboring voxel (and, therefore, based on an estimate of the strength of its inter-voxel contribution). For example, three different weights may need to be specified: a weight for voxels that share a 2D surface, a weight for voxels that share a 1D line, and a weight for voxels that share a 0D point. Because the distribution of tissue within each voxel may not be uniform, the weights may be dynamically adjusted to model different types of distributions within each voxel in order to find the distribution that minimizes the error. This may provide the ability to identify multiple MR features within a single voxel for different types of tissue. Note that as computing power increases, the accuracy of the third predictive model may increase, and the analytical techniques used to solve the minimization problem (and, thus, the inverse problem) may be modified.
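The three neighbor classes and their weights can be enumerated directly for a 3 × 3 × 3 neighborhood of 1 mm isotropic voxels. A sketch, with the specific 1/r² weighting taken from the discussion above (other weightings, e.g., sphere-overlap based, would substitute here):

```python
import itertools

def neighbor_weights():
    """Classify the 26 neighbors of a central voxel by what they share with
    it (2D face, 1D edge, or 0D corner point) and weight each class by
    1/r^2, where r is the center-to-center distance in voxel units."""
    weights = {}
    for offset in itertools.product((-1, 0, 1), repeat=3):
        if offset == (0, 0, 0):
            continue                              # skip the central voxel
        nonzero = sum(1 for c in offset if c != 0)
        r_squared = sum(c * c for c in offset)    # 1 (face), 2 (edge), 3 (corner)
        kind = {1: "face", 2: "edge", 3: "corner"}[nonzero]
        weights[offset] = (kind, 1.0 / r_squared)
    return weights

w = neighbor_weights()
faces   = [o for o, (k, _) in w.items() if k == "face"]    # 6 face-sharing
edges   = [o for o, (k, _) in w.items() if k == "edge"]    # 12 edge-sharing
corners = [o for o, (k, _) in w.items() if k == "corner"]  # 8 corner-sharing
```

This reproduces the 26-neighbor structure of the graph G described earlier: 6 face-sharing, 12 edge-sharing, and 8 corner-sharing neighbors, with monotonically decreasing weights.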
Thus, in embodiments where the forward model of a voxel depends on the forward models of surrounding or neighboring voxels, the forward model of the voxel may be computed using 2nd- or Nth-order effects. For example, if there are N 1st-order forward models (where N is an integer), there may be as many as N!/(N−27)! 2nd-order forward models (if all of the voxels interact with each other). In some embodiments, locality is used to simplify the inverse problem. In this way, a forward model can be generated by combining the ways in which the forward models of neighboring voxels affect the forward model of the main (central) or 1st-order voxel.
In some embodiments, dithering techniques are used to overcome arbitrary locations of voxels relative to the distribution of tissue types in the body. In particular, due to arbitrary voxel placement or current voxel size, two or more types of tissue may be present in a voxel. This may significantly change the forward model parameters for that voxel. This may indicate that more than one forward model is required for the voxel. To confirm this, the voxels may be displaced by a distance dx (which is a fraction of the voxel length, width, or height), and the forward model parameters may be determined again (e.g., using a predictive model). In these processes, the tissue distribution can be determined. Thus, this approach can effectively increase spatial resolution in the analysis without changing the voxel size.
Fig. 6 presents a diagram illustrating an example of classification or segmentation of one or more anatomical structures 600. Notably, fig. 6 illustrates identifying or segmenting an organ 610 based at least in part on T1 and T2 at voxel boundaries.
Although the foregoing discussion describes computational techniques using MR techniques, this approach can be generalized to measurement systems that are capable of physically modeling and measuring samples in real time using various characterization techniques. In general, computing techniques may use mechanical and/or electromagnetic waves in combination to "perturb" or "excite" a scanned volume in order to evaluate the correctness of predictions based on how the volume responds to the perturbation. This also includes the ability of the system to simulate itself and any portion of the environment in which the system is located that may affect the accuracy or correctness of the forward model that the system is attempting to generate to describe the volume being scanned or measured.
Note that different characterization techniques can provide tensor field mapping and the ability to detect tensor field anomalies. These mappings may be image or quantitative tensor field mappings, and each characterization technique may provide a visualization of different types of tensor field mappings captured with different types of measurements. By looking at or considering two or more of these mappings, the system can access the orthogonal information.
Thus, the system may provide a way to capture high-order or super-dimensional pseudo-or mixed tensors or matrices at each voxel in 3D space in real-time or near real-time. Using electromagnetic and/or mechanical perturbations or excitations, the system may use different characterization techniques to measure the perturbations and responses, and then simulate the response.
The result of this characterization may be a (4+N)D quantitative model of the scanned volume (three spatial dimensions, one temporal dimension, and up to N measurement dimensions at each point in space). Note that the (4+N)D quantitative model may be projected onto any subset of the full (4+N)D space, including 2D or 3D images.
In some embodiments, the use of multi-dimensional data and models provides enhanced diagnostic accuracy (i.e., lower false positive rate) relative to conventional MRI methods, even with larger voxel sizes. Thus, computational techniques can improve diagnostic accuracy at larger voxel sizes (or weaker external magnetic fields) than required for conventional MRI. However, as previously described, the computational techniques may be used with various measurement techniques, either separately from or in addition to MRI.
In some existing MR scanners, multiple receive channels (with receivers and associated antennas) are used to speed up or reduce the time needed to perform an MR scan. These methods are sometimes referred to as "MRI parallel imaging."
Notably, the gradient coils in the MR scanner phase-encode the MR signals (in time), which allows the output MR signals to be distinguished from each other. Furthermore, when there are multiple receive channels, there is redundancy in the collected phase-encoded MR signals. In principle, by exploiting the different phase profiles, this redundancy allows some phase-encoded MR signals (i.e., some of the MR scan lines) to be skipped and subsequently reconstructed from the other phase-encoded MR signals, thereby speeding up the MR scan.
For example, for 2D space, during an MR scan, RF pulses can be applied, and then the gradient coils in x and y can be turned on and the MR scan lines in k-space can be read out. These operations (applying RF pulses and reading out MR scan lines) can then be repeated multiple times for additional MR scan lines (with different phase encoding) until, for example, 256 MR scan lines are read out. By using, for example, 32 receive channels and skipping measurements of some of these MR scan lines, the MR scan time can be reduced by, for example, a factor of 2 or 3.
Note, however, that the reduction in MR scan time is not a linear function of the number of receive channels. This is because in many MRI parallel imaging techniques additional information is required to reconstruct skipped MR scan lines. Thus, the number of MR scan lines is reduced by less than the number of receive channels, or a separate pre-scan is used to acquire additional information.
Notably, there are two main categories of existing MRI parallel imaging techniques. The first category of methods (referred to as "SENSE", "ASSET", "RAPID", or "SPEEDER") are image-domain based methods after reconstruction of the MR signals from individual RF pickup coils or antennas (sometimes referred to as "coils") in the receive channel. In this approach, the number of MR scan lines dropped or skipped may be equal to the number of receive channels. However, a separate pre-scan is used to determine the coil sensitivities (or coil sensitivity maps) of the receive channels. This is because the MR signals measured using a given receive channel during an MR scan correspond to the volume integral of the product of the coil sensitivity of the given receiver channel and the time-dependent magnetization of the sample. Furthermore, because the polarized magnetic field received by a coil or antenna in a given receive channel depends on its position and orientation, in general, each of the coils or antennas in the receive channel has a different coil sensitivity. By performing a pre-scan, the coil sensitivity can be predetermined. Next, in the image domain, sample properties (such as spatially varying proton density) may be accounted for or presented.
Thus, in existing MRI scanners, the first class of methods may involve the following operations: generating the coil sensitivity maps, acquiring partial k-space MR data, reconstructing a partial field-of-view image from each coil, and unfolding/combining the partial field-of-view images using matrix inversion. Note, therefore, that the first class of methods is recast as a linear problem and may be solved in part using Fourier and inverse Fourier transforms.
The second class of methods (which is referred to as "GRAPPA") is k-space based. Such methods may not use a pre-scan to determine the coil sensitivities. Instead, extra or additional MR scan lines may be acquired near k equal to zero in k-space. By exploiting the smoothness of these so-called "auto-calibration lines" around k equal to zero, the missing (skipped) MR scan lines can be calculated (e.g., by interpolation using the auto-calibration lines).
Thus, in existing MR scanners, the second class of methods may involve reconstructing the Fourier plane of the image from the frequency signals of each coil (i.e., reconstruction in the frequency domain). Again, note that the second class of methods is recast as a linear problem and may be solved in part using Fourier and inverse Fourier transforms.
Furthermore, there are some other (less common) methods for MRI parallel imaging. Notably, the coil sensitivities and the sample properties (such as the spatially varying proton density) can be determined simultaneously (rather than, e.g., using a pre-scan) in a joint reconstruction. For example, in principle, the coil sensitivities and the spatially varying proton density can be calculated from the MR signals by solving a nonlinear inversion problem. However, this nonlinear optimization problem is typically ill-posed (i.e., there is no unique solution, because there are more unknowns than the measured MR signals can specify).
One method of solving the nonlinear optimization problem is to use a hypothetical regularizer to constrain the optimization. For example, the coil sensitivities may be assumed to be smooth. This constraint may allow a solution to be obtained, but in general, analysis times are often very long.
Another approach to solving the nonlinear optimization problem is to assume that the coil sensitivities can be represented as linear superpositions of polynomial functions. However, this polynomial expansion is usually ill-behaved. Notably, it may be difficult to solve the nonlinear optimization problem with polynomial functions of order higher than quadratic.
In embodiments of the disclosed computational technique, the nonlinear optimization problem can be solved without assuming that the coil sensitivities are smooth, are linear superpositions of polynomial functions, or have any predefined closed-form functional representation. Instead, each coil sensitivity may be a solution of Maxwell's equations in the field of view of the MR device at a given external magnetic field strength (i.e., Maxwell's equations may be satisfied, rather than approximated). In addition to being physically accurate, the resulting coil sensitivities may allow the nonlinear optimization problem to be solved much faster than with existing nonlinear optimization methods. This capability can significantly reduce the MR scan time, either on its own or in combination with skipped MR scan lines.
Furthermore, because the disclosed computational techniques (which are sometimes referred to as "Maxwell parallel imaging") do not involve a pre-scan to determine the coil sensitivities or measurements of auto-calibration lines, Maxwell parallel imaging may be significantly faster than the first and/or second classes of MRI parallel imaging methods described previously. For example, the MR scan time with Maxwell parallel imaging may be at least, e.g., 2 to 4 times shorter than with these existing classes of approaches. In fact, Maxwell parallel imaging may achieve the theoretical limit of the possible acceleration in MR scan time for a given coil set, field of view, and external magnetic field strength (or resolution), for both 2D and 3D measurements.
Note that Maxwell parallel imaging can be used to accelerate the MR scan time with qualitative or quantitative MR measurements. Thus, Maxwell parallel imaging may be used with MRI, MR fingerprinting, tensor field mapping, and/or another MR measurement technique.
In general, the solutions to Maxwell's equations for the coil sensitivities are circularly polarized magnetic fields. These coil magnetic fields can be generated offline (i.e., not during the MR scan) in the field of view of the MR device using numerical simulation. For example, the coil magnetic fields may be calculated from distributions of currents (e.g., dipoles) on a surface surrounding the field of view of the MR device. In some embodiments, tens of thousands or more random currents may be placed on the surface.
However, because of the low frequency (the precession frequency of protons in a 1.5 T external magnetic field is 63.87 MHz) and the near-field conditions, the magnetic fields produced by the currents on the surface may be similar to each other. Thus, there may be a set of coil magnetic field basis vectors that encompasses or includes most of the energy or power in the different coil magnetic fields. For example, a singular value decomposition or an eigenvalue decomposition technique may be used on the different numerically simulated coil magnetic fields to determine the set of coil magnetic field basis vectors. A given coil magnetic field (and thus a given coil sensitivity) may then be a linear superposition of the set of coil magnetic field basis vectors. In some embodiments, the set of coil magnetic field basis vectors may include, e.g., 30 coil magnetic field basis vectors. Also, note that the coil magnetic field basis vectors may each be a solution of Maxwell's equations. Alternatively, in some embodiments, the coil magnetic field basis vectors may each be an approximation of a solution of Maxwell's equations (e.g., within 85%, 95%, or 99% of a solution of Maxwell's equations).
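The offline basis extraction described above can be sketched numerically. In this hypothetical example, the field-of-view size, the number of simulated sources, the latent-mode trick used to mimic near-field redundancy, and the 99% energy threshold are all illustrative assumptions, not values from this disclosure; a truncated singular value decomposition compresses a redundant ensemble of simulated fields into a small orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 64 * 64   # sample points in the field of view (illustrative)
n_sources = 500      # random surface-current distributions, simulated offline

# At low frequency and in the near field the simulated fields are highly
# redundant; mimic that redundancy here by mixing only a few latent modes.
latent = rng.standard_normal((n_voxels, 8))
fields = latent @ rng.standard_normal((8, n_sources))

# Truncated SVD: keep the left singular vectors carrying ~99% of the energy.
U, s, _ = np.linalg.svd(fields, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :k]     # columns play the role of coil magnetic field basis vectors
```

In a real implementation, `fields` would come from an electromagnetic solver applied to the simulated surface currents rather than from random mixing; the SVD step is unchanged.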
By using a set of coil magnetic field basis vectors, the nonlinear optimization problem can be physically "regularized" and solved in much less time. For example, if no regularization assumption is made, a nonlinear optimization problem for a 2D MR scan with 12 coils and a 256-point Fourier transform resolution may involve solving for 256² + 12·256² unknown parameters. The first term corresponds to the unknown proton density (one unknown per voxel), and the second term corresponds to the unknown coil sensitivities. As previously mentioned, this problem is ill-posed, so there is no unique solution, and various approximations or assumptions have been used in some prior approaches.
In contrast, in Maxwell parallel imaging, the nonlinear optimization problem does not solve for the unknown coil sensitivities directly, but rather determines the coefficients of the different coils in a weighted linear superposition of the set of coil magnetic field basis vectors. Thus, a nonlinear optimization problem for a 2D MR scan with 12 coils, 30 coil magnetic field basis vectors, and a 256-point Fourier transform resolution may involve solving for only 256² + 12·30 unknown parameters. Consequently, Maxwell parallel imaging can solve for, e.g., the unknown proton density and the unknown coil sensitivities much faster than prior methods, because instead of solving for the coil sensitivities voxel by voxel, it simultaneously computes the coefficients of the set of coil magnetic field basis vectors and, e.g., the proton density.
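The reduction in the number of unknowns quoted above can be checked with trivial arithmetic (the coil count, resolution, and basis size are the example values from the text):

```python
n, n_coils, n_basis = 256, 12, 30

# Unconstrained: one density unknown per voxel plus one sensitivity
# unknown per voxel per coil.
unknowns_unconstrained = n**2 + n_coils * n**2

# Maxwell parallel imaging: density unknowns plus a few basis
# coefficients per coil.
unknowns_maxwell = n**2 + n_coils * n_basis

print(unknowns_unconstrained)  # 851968
print(unknowns_maxwell)        # 65896
```

The coil-sensitivity unknowns shrink from 12·256² = 786,432 to 12·30 = 360, so the problem size is dominated by the density unknowns alone.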
Note that in Maxwell parallel imaging, a given coil sensitivity may be represented by, or equal to, a weighted superposition of the set of coil magnetic field basis vectors (i.e., a linear superposition of the products of the coefficients and the corresponding coil magnetic field basis vectors). Further, note that Maxwell parallel imaging can determine the coil sensitivities more accurately, because ultimately it involves solutions of Maxwell's equations (the set of coil magnetic field basis vectors) without additional assumptions. Furthermore, while the weighted superposition of the set of coil magnetic field basis vectors may be an approximation of a given coil sensitivity, it may be a more accurate physical representation.
In Maxwell parallel imaging, the nonlinear optimization problem may involve iteratively solving (e.g., minimizing) a constrained data fidelity term (the squared absolute value of the difference between the measured MR signals and the estimated MR signals). Note that the data fidelity term may incorporate or include contributions from the coil sensitivities (e.g., a weighted superposition of the set of coil magnetic field basis vectors). Further, note that the constraints may include: the structure of the spatial distribution of the proton or nuclear density (and, more generally, of MR parameters such as nuclear density, relaxation times, etc.), the total variation of the proton density (or MR parameter), and/or another suitable regularizer of the proton density (or MR parameter). In general, the regularization terms on the proton density (or MR parameters) may correspond to those used in image processing. Therefore, a regularization term on the proton density (or MR parameters) may avoid the L2 norm or a smoothness criterion.
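A minimal sketch of such an objective, assuming a simplified encoding operator (coil-sensitivity weighting, a 2D FFT, and a k-space sampling mask) and an anisotropic total-variation regularizer; the function names, shapes, and the regularization weight are illustrative stand-ins, not the disclosure's implementation:

```python
import numpy as np

def estimated_signal(density, sensitivity, mask):
    # image -> coil image -> k-space -> undersampled k-space
    return mask * np.fft.fft2(sensitivity * density)

def objective(density, sensitivities, measured, mask, lam):
    # Data fidelity: squared absolute difference between measured and
    # estimated MR signals, summed over coils.
    fidelity = sum(
        np.sum(np.abs(estimated_signal(density, s, mask) - y) ** 2)
        for s, y in zip(sensitivities, measured)
    )
    # Anisotropic total variation of the density (finite differences).
    tv = np.sum(np.abs(np.diff(density, axis=0))) + \
         np.sum(np.abs(np.diff(density, axis=1)))
    return fidelity + lam * tv
```

A solver would minimize `objective` jointly over the density and the basis coefficients that define each entry of `sensitivities`.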
In some embodiments, a non-linear optimization problem may be solved using a predefined or pre-trained neural network or a predefined or pre-trained machine learning model. In these embodiments, the coil sensitivities may likewise be represented by a weighted superposition of the set of coil magnetic field basis vectors.
Fig. 7 presents a flow chart illustrating an example of a method 700 for determining coefficients in a representation of coil sensitivities and MR information associated with a sample. This method may be performed by a system (e.g., system 100 in fig. 1) or one or more components in a system (e.g., source 110, measurement device 114, and/or computer 116).
During operation, the computer may acquire MR signals from or associated with the sample (operation 710). This may involve having the MR device apply external magnetic fields, gradient magnetic fields and/or one or more RF pulse sequences and measure the MR signals using a receiver or reception channel. Alternatively or additionally, the computer may access MR signals stored in memory that were previously acquired by the MR apparatus or measurement device. Note that the MR device may be located remotely from the computer, or may be close to the computer (e.g., at a common facility).
The computer may then access (e.g., in memory) a set of predetermined coil magnetic field basis vectors (operation 712), where a weighted superposition of the set of predetermined coil magnetic field basis vectors may represent the coil sensitivities of the coils in the MR device. For example, a given coil sensitivity may be represented by a linear superposition of the products of coefficients and the predetermined coil magnetic field basis vectors in the set of predetermined coil magnetic field basis vectors. Note that each of the predetermined coil magnetic field basis vectors may be a solution of Maxwell's equations.
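At the level of shapes (all dimensions here are illustrative assumptions), the representation accessed in operation 712 amounts to one coefficient vector per coil applied to a shared, precomputed basis:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_basis, n_coils = 1024, 30, 12

# Predetermined offline; each column is one basis vector over the field of view.
basis = rng.standard_normal((n_voxels, n_basis))

# Solved per scan: one coefficient vector per coil.
coeffs = rng.standard_normal((n_coils, n_basis))

# Each coil sensitivity is a weighted linear superposition of the basis.
sensitivities = coeffs @ basis.T   # shape (n_coils, n_voxels)
```

Real coil fields and coefficients are complex-valued; real arrays are used here only to keep the shape-level sketch simple.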
Next, the computer may solve a nonlinear optimization problem for the MR information associated with the sample and the coefficients, using the MR signals and the set of predetermined coil magnetic field basis vectors (operation 714). For example, the computer may reduce or minimize a term corresponding to the squared absolute value of the difference between the MR signals and the estimated MR signals. The term may comprise or incorporate contributions from the coil sensitivities of the coils in the MR device. For example, a given coil sensitivity may be represented by a linear superposition of the set of predetermined coil magnetic field basis vectors, where the weights are the coefficients of the predetermined coil magnetic field basis vectors. Furthermore, the estimated MR signals may correspond to the MR information specified by the MR signals (such as the spatial distribution of one or more MR parameters in voxels, e.g., the proton or nuclear density, relaxation times, etc.). Further, the nonlinear optimization problem may include one or more constraints on the reduction or minimization of the term, such as one or more constraints corresponding to the spatial distribution of the one or more MR parameters (e.g., regularizers corresponding to the one or more MR parameters).
In some embodiments, the nonlinear optimization problem is solved iteratively (e.g., until a convergence criterion is reached). However, in other embodiments, the nonlinear optimization problem is solved using a pre-trained neural network or a pre-trained machine learning model that maps the MR signals and the set of coil magnetic field basis vectors to the spatial distributions of the one or more MR parameters (e.g., in voxels) and the coefficients. Thus, in some embodiments, the nonlinear optimization problem may be solved without iteration.
Furthermore, in some embodiments, the spatial distribution of the one or more MR parameters specifies a spatial distribution of nuclear density in the sample (e.g., in the image). Thus, in some embodiments, the MR signal may be determined in a qualitative measurement (such as MRI or another MR measurement technique). Thus, in these embodiments, the MR device may be an MR scanner.
Alternatively, in some embodiments, the spatial distribution of one or more MR parameters may correspond to the model parameters previously discussed. Thus, in some embodiments, the MR signal may be determined in a quantitative measurement (such as TFM, MR fingerprinting, or another quantitative MR measurement technique).
In some embodiments of methods 200 (fig. 2) and/or 700, additional or fewer operations may be present. Further, the order of the operations may be changed, and/or two or more operations may be combined into a single operation.
Fig. 8 presents a diagram illustrating an example of communication between components in the system 100 (fig. 1) and the measurement device 114. Notably, the processor 810 in the computer 116 can execute program instructions (p.i.) 812 stored in the memory 814. When the processor 810 executes the program instructions 812, the processor 810 may perform at least some of the operations in the computational technique.
During the computational technique, the processor 810 may provide instructions 818 to an interface circuit (i.c.) 816. In response, the interface circuit 816 may provide the instructions 818 to the measurement device 114 (e.g., an MR device) to acquire MR signals 820 associated with the sample, which are then provided to the computer 116. Note that in some embodiments, the measurement device 114 can include a source, such as a source that provides an external magnetic field, a gradient magnetic field, and/or an RF pulse sequence to the sample.
Upon receiving the MR signals 820, the interface circuit 816 may provide the MR signals 820 to the processor 810. The processor 810 may then access a set of predetermined coil magnetic field basis vectors (s.c.m.f.b.v.) 822 in the memory 814, where a weighted superposition of the set of predetermined coil magnetic field basis vectors 822 may represent the coil sensitivities of the coils in the measurement device 114, and a given predetermined coil magnetic field basis vector may be a solution of Maxwell's equations.
Next, the processor 810 may use the MR signals 820 and the set of predetermined coil magnetic field basis vectors 822 to solve a non-linear optimization problem for the MR information 824 and coefficients 826 in the weighted superposition on a voxel-by-voxel basis in the sample. Further, processor 810 may perform additional acts 828. For example, processor 810 may: the MR information 824 and/or coefficients 826 are provided to a user or another electronic device via interface circuitry 816, the MR information 824 and/or coefficients 826 are stored in memory 814, and/or the MR information 824 and/or coefficients 826 may be presented on a display 830.
Although the communication between components in fig. 3 and/or fig. 8 is illustrated as unidirectional or bidirectional (e.g., lines with single or double arrows), in general a given communication operation may be unidirectional or bidirectional.
In some embodiments, the computational techniques address MRI reconstruction using multiple MR coils and undersampled k-space measurements. By addressing this problem, computational techniques can significantly reduce MR acquisition or scanning time without compromising the quality of the recovered or reconstructed image. This problem is known as "parallel imaging" or MRI parallel imaging.
The problem solved by computational techniques is ill-posed due to the limited or reduced number of k-space measurements and the presence of noise. This means that there is no unique solution and additional a priori knowledge about the underlying Weighted Proton Density (WPD) attribute (in the previous discussion, WPD is sometimes referred to as proton density or nuclear density) may need to be utilized in order to obtain a solution with physical significance. Furthermore, another challenge of parallel imaging is that, in addition to WPD (the number that needs to be estimated accurately), the MR coil sensitivity is also unknown.
To address this problem, the computational techniques or Maxwell parallel imaging techniques may use an iterative Gauss-Newton regularization technique to solve a bilinear problem in the WPD and the coil sensitivities. For example, the computational techniques may include an explicit regularizer on the WPD and an implicit regularizer on the coil sensitivities.
In some embodiments, the regularizer on the WPD may be quadratic, e.g., involving the identity operator, the gradient, the Hessian, or the Laplacian; alternatively, it may be a non-smooth convex regularizer (e.g., total variation or total structural variation). In the case of a quadratic regularizer, because the data fidelity term is also quadratic, an iterative solution can be obtained by solving the augmented Gauss-Newton normal equations. For example, the augmented Gauss-Newton normal equations may be solved using a conjugate gradient technique. Alternatively, when the regularizer on the WPD is a non-smooth convex function, the solution in each Gauss-Newton iteration can be obtained by employing an accelerated proximal gradient technique (e.g., FISTA).
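For the quadratic case, the inner linear solve of the normal equations can be performed with conjugate gradients. Below is a generic textbook sketch for a symmetric positive-definite system, as arises from Gauss-Newton normal equations with a quadratic regularizer; it is not the disclosure's implementation, and the tolerance and iteration cap are arbitrary:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In a real reconstruction, `A` would be applied matrix-free (FFTs plus sensitivity weighting plus the regularizer) rather than stored as a dense matrix.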
Furthermore, the implicit regularization of the coil sensitivities may be different from existing methods. Notably, existing implicit regularization of the coil sensitivities may merely force the resulting coil sensitivities (which are essentially the circularly polarized magnetic fields received by the coils) to be smooth. In the disclosed implicit regularization of the coil sensitivities, stronger physics-based constraints may be imposed. More precisely, a complete (up to, e.g., 85%, 95%, or 99% numerical accuracy) basis of circularly polarized magnetic fields may be generated. This basis may be supported in the field of view of the MR scanner (or, more generally, the MR device) for a given set of MR coils. For example, the basis may be determined using a randomized singular value decomposition of a matrix that maps circularly polarized magnetic fields within the field of view from a set of tens of thousands or more dipole sources on a surface surrounding the field of view and located near a given MR coil. The calculation of the magnetic fields for these current sources may involve the use of a full-wave electromagnetic solver based on state-of-the-art volume-integral-equation techniques.
Thus, in the resulting nonlinear optimization problem, the coefficients of this basis can be determined, rather than the actual coil sensitivities or magnetic fields. This approach can ensure that the coil sensitivities are not only smooth, but also satisfy Maxwell's equations by construction, which is a stronger constraint (and closer to reality). Furthermore, because of the smoothness of the coil sensitivities, only a small number of members of this basis may be required for a high-fidelity coil sensitivity estimate. This capability can translate into orders of magnitude fewer parameters in the associated nonlinear optimization problem. Moreover, the Maxwell parallel imaging technique is applicable, without modification, to any (i.e., arbitrary) magnetic field strength of the MR scanner or MR device (e.g., external magnetic field strengths from a few millitesla to 11 T or stronger).
Thus, the Maxwell parallel imaging technique may provide an estimate of the WPD and an accurate estimate of the coil sensitivities. To further enhance the quality of the WPD image, in some embodiments, the WPD image may be denoised by solving a constrained optimization problem. Notably, the total variation or the structural total variation of the solution is minimized under the constraint that the norm of the difference between the input and the solution is less than or equal to an amount proportional to the standard deviation of the noise. Note that the standard deviation can be computed directly from the WPD previously estimated by the Maxwell parallel imaging technique.
Alternatively, the estimated coil sensitivities previously determined by the Maxwell parallel imaging technique can be used to recast the original nonlinear problem as a linear problem. This linear problem may still be ill-posed because of the undersampling of k-space. The final estimate of the WPD image can then be obtained as the solution of a constrained convex optimization problem. Notably, the improved estimate of the WPD image may correspond to a minimizer of the total variation or the structural total variation subject to a number of constraints, which may be equal to the number of MR coil measurements. Each constraint may force the norm of the difference between a coil measurement and the corresponding observation or estimation model (which involves the solution) to be less than or equal to an amount proportional to the standard deviation of the noise affecting that particular coil measurement. These operations may provide a parameter-free denoising technique.
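The linearized step can be illustrated on a toy noiseless 1D problem, assuming the coil sensitivities are already fixed: stacking the masked Fourier encodings of all coils yields a linear system in the WPD. Everything here (sizes, the every-other-line mask, the random "estimated" sensitivities) is an illustrative assumption, and an ordinary least-squares solve stands in for the constrained convex program described above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_coils = 16, 3                 # tiny 1D "image", three coils

F = np.fft.fft(np.eye(n))          # dense DFT matrix (fine at this size)
mask = np.zeros(n)
mask[::2] = 1.0                    # undersample: keep every other k-space line

rho_true = rng.standard_normal(n)  # ground-truth weighted proton density

blocks, data = [], []
for _ in range(n_coils):
    s = rng.standard_normal(n)     # a fixed, previously estimated sensitivity
    A = (mask[:, None] * F) @ np.diag(s)   # mask, then DFT, then sensitivity
    blocks.append(A)
    data.append(A @ rho_true)      # noiseless coil measurements

A_full = np.vstack(blocks)
y_full = np.concatenate(data)
rho_hat, *_ = np.linalg.lstsq(A_full, y_full, rcond=None)
```

Even though each coil alone sees only half of k-space, the three coils together make the stacked system determined, so the density is recovered; with noisy data one would instead solve the total-variation-constrained problem described above.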
We now further describe an electronic device that performs at least some of the operations in the computing techniques. FIG. 9 presents a block diagram illustrating an electronic device 900, such as computer 116 (FIG. 1), in system 100 (FIG. 1) or another computer-controlled component, such as source 110 or measuring device 114 (FIG. 1), in system 100. This electronic device includes a processing subsystem 910, a memory subsystem 912, and a networking subsystem 914. Processing subsystem 910 may include one or more devices configured to perform computing operations and control components in system 100 (fig. 1). For example, processing subsystems 910 may include one or more microprocessors or Central Processing Units (CPUs), one or more Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs), microcontrollers, programmable logic devices (such as field programmable logic arrays or FPGAs), and/or one or more Digital Signal Processors (DSPs).
The memory subsystem 912 may include one or more means for storing data and/or instructions for the processing subsystem 910 and the networking subsystem 914. For example, the memory subsystem 912 may include Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), and/or other types of memory. In some embodiments, instructions for processing subsystem 910 in memory subsystem 912 include one or more program modules or sets of instructions (e.g., program instructions 924) that are executable by processing subsystem 910 in an operating environment (e.g., operating system 922). Note that the one or more computer programs may constitute a computer program mechanism or a program module (i.e., software). Further, instructions in various modules in memory subsystem 912 may be implemented as: a high-level programming language, an object-oriented programming language, and/or an assembly or machine language. Further, a programming language, such as configurable or configured (which may be used interchangeably in this discussion), may be compiled or interpreted for execution by the processing subsystem 910.
Further, memory subsystem 912 may include mechanisms for controlling access to the memory. In some embodiments, memory subsystem 912 includes a memory hierarchy that includes one or more caches coupled to memory in electronic device 900. In some of these embodiments, one or more of the caches are located in processing subsystem 910.
In some embodiments, memory subsystem 912 is coupled to one or more high capacity mass storage devices (not shown). For example, the memory subsystem 912 may be coupled to a magnetic or optical drive, a solid state drive, or another type of mass storage device. In these embodiments, memory subsystem 912 may be used by electronic device 900 as a fast access memory for commonly used data, while mass storage devices are used to store less commonly used data.
In some embodiments, memory subsystem 912 includes a remotely located archive. Such an archive may be a high-capacity network-attached storage device, such as: network attached storage (NAS), an external hard drive, a storage server, a server cluster, a cloud storage provider, a cloud computing provider, a tape backup system, a medical record archiving service, and/or another type of archive. Further, processing subsystem 910 may interact with the archive via an application programming interface to store information in and/or access information from the archive. Note that memory subsystem 912 and/or electronic device 900 may comply with the Health Insurance Portability and Accountability Act.
An example of data stored (locally and/or remotely) in memory subsystem 912 is shown in fig. 10, which presents a diagram illustrating an example of a data structure 1000 used by electronic device 900 (fig. 9). This data structure may include: an identifier 1010-1 of a sample 1008-1 (e.g., an individual), metadata 1012 (e.g., age, gender, biopsy results and diagnosis (if one has been made), other sample information, demographic information, family history, etc.), a timestamp 1014 when the data was acquired, the received measurements 1016 (e.g., MR signals and, more generally, raw data), the excitation and measurement conditions 1018 (e.g., the external magnetic field, optional gradients, RF pulse sequences, the MR device, the location, machine-specific characteristics such as magnetic field inhomogeneity, RF noise and one or more other system defects, signal processing techniques, registration information, synchronization information between measurements and the heart or breathing pattern of the individual, etc.), and/or determined model parameters 1020 (including voxel size, velocity, the resonance frequency or type of nuclei, T1 and T2 relaxation times, segmentation information, classification information, etc.), environmental conditions 1022 (e.g., the temperature, humidity, and/or atmospheric pressure in the room or chamber in which sample 1008-1 is measured), a forward model 1024, one or more additional measurements 1026 of a physical attribute of sample 1008-1 (e.g., weight, size, an image, etc.), optional detected abnormalities 1028 (which may include the particular voxel(s) associated with one or more of the detected abnormalities 1028), and/or an optional classification 1030 of the one or more detected abnormalities 1028. Note that data structure 1000 may include multiple entries for different measurements.
In one embodiment, the data in the data structure 1000 is encrypted using a blockchain or similar cryptographic hashing technique to detect unauthorized modification or corruption of records. Further, the data may be anonymized prior to storage such that the identity of the individual associated with the sample is anonymous unless the individual allows or authorizes access to or publication of the identity of the individual.
Returning to fig. 9, networking subsystem 914 may include one or more devices configured to couple to and communicate over a wired, optical, and/or wireless network (i.e., to perform network operations and, more generally, communication), including: control logic 916, an interface circuit 918, one or more antennas 920, and/or input/output (I/O) ports 928. (Although fig. 9 includes the one or more antennas 920, in some embodiments electronic device 900 includes one or more nodes 908, such as pads or connectors, which may be coupled to the one or more antennas 920. Thus, electronic device 900 may or may not include the one or more antennas 920.) For example, networking subsystem 914 may include a Bluetooth networking system (which may include Bluetooth Low Energy, BLE, or Bluetooth LE), a cellular networking system (e.g., a 3G/4G/5G network such as UMTS, LTE, etc.), a universal serial bus (USB) networking system, a networking system based on the standards described in IEEE 802.11 (e.g., a Wi-Fi networking system), an Ethernet networking system, and/or another networking system.
Further, networking subsystem 914 may include a processor, controller, radio/antenna, socket/plug, and/or other means for interfacing, communicating, and processing data and events for each supported networking system. Note that the mechanisms used to couple, communicate, and process data and events for each network system over the network are sometimes collectively referred to as the "network interface" of networking subsystem 914. Furthermore, in some embodiments, a "network" between components in system 100 (FIG. 1) does not yet exist. Thus, electronic device 900 may use mechanisms in networking subsystem 914 to perform simple wireless communications between components, such as transmitting advertisement or beacon frames and/or scanning for advertisement frames transmitted by other components.
Within electronic device 900, processing subsystem 910, memory subsystem 912, and networking subsystem 914 may be coupled using one or more interconnects, such as a bus 926. These interconnects may include electrical, optical, and/or electro-optical connections that the subsystems may use to communicate commands and data between each other. Although only one bus 926 is shown for clarity, different embodiments may include different numbers or configurations of electrical, optical, and/or electro-optical connections between subsystems.
The electronic device 900 may be (or may be included in) a wide variety of electronic devices. For example, the electronic device 900 may be included in: a tablet computer, a smartphone, a smartwatch, a portable computing device, a wearable device, test equipment, a digital signal processor, a cluster of computing devices, a laptop computer, a desktop computer, a server, a subnotebook/netbook, and/or another computing device.
Although electronic device 900 is described using particular components, in alternative embodiments, different components and/or subsystems may be present in electronic device 900. For example, electronic device 900 may include one or more additional processing subsystems, memory subsystems, and/or networking subsystems. Further, one or more subsystems may not be present in electronic device 900. Further, in some embodiments, electronic device 900 may include one or more additional subsystems not shown in fig. 9.
Although separate subsystems are shown in fig. 9, in some embodiments, some or all of a given subsystem or component may be integrated into one or more of the other subsystems or components in electronic device 900. For example, in some embodiments, the program instructions 924 are included in the operating system 922. In some embodiments, components in a given subsystem are included in different subsystems. Further, in some embodiments, electronic device 900 is located at a single geographic location or distributed across multiple different geographic locations.
Further, the circuits and components in the electronic device 900 may be implemented using any combination of analog and/or digital circuits, including: bipolar, PMOS and/or NMOS gates or transistors. Further, the signals in these embodiments may include digital signals having approximately discrete values and/or analog signals having continuous values. Further, the components and circuits may be single ended or differential, and the power supply may be unipolar or bipolar.
The integrated circuit may implement some or all of the functionality of networking subsystem 914 (such as a radio) and, more generally, some or all of the functionality of electronic device 900. Further, the integrated circuit may include hardware and/or software mechanisms for transmitting wireless signals from the electronic device 900 and receiving signals at the electronic device 900 from other components in the system 100 (fig. 1) and/or from electronic devices external to the system 100 (fig. 1). Radios other than the mechanisms described herein are well known in the art and therefore not described in detail. In general, networking subsystem 914 and/or integrated circuit may include any number of radios. Note that the radios in the multi-radio embodiment function in a similar manner to the radios described in the single-radio embodiment.
Although some of the operations in the foregoing embodiments are implemented in hardware or software, in general, the operations in the foregoing embodiments may be implemented in a variety of configurations and architectures. Accordingly, some or all of the operations in the foregoing embodiments may be performed in hardware, software, or both.
Further, in some of the foregoing embodiments, there may be fewer components or more components, the positions of components may be changed, and/or two or more components may be combined.
Although the foregoing discussion describes computational techniques for solving vector wave equations, in other embodiments computational techniques may be used for solving scalar equations. For example, the acoustic wave equation can be solved in any inhomogeneous medium based on ultrasound measurements using a forward model. (thus, in some embodiments, the excitation may be mechanical.) note that the acoustic coupling at the ultrasonic measurement may be operator dependent (i.e., the ultrasonic measurement may be pressure dependent). Nevertheless, a similar approach can be used for: improve ultrasound imaging, determine 3D structures, facilitate improved presentations, and the like.
In the foregoing description, we have referred to "some embodiments". Note that "some embodiments" describe a subset of all possible embodiments, but do not always specify the same subset of embodiments. Further, note that the numerical values in the foregoing embodiments are illustrative examples of some embodiments. In other embodiments of the computing technique, different values may be used.
The previous description is intended to enable any person skilled in the art to make and use the disclosure, and is provided in the context of a particular application and its requirements. Furthermore, the foregoing descriptions of the embodiments of the present disclosure are presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the disclosure to the forms disclosed. Accordingly, many modifications and variations will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the disclosure. Furthermore, the discussion of the preceding embodiments is not intended to limit the present disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.

Claims (32)

1. A method for determining coefficients in a representation of coil sensitivities and MR information associated with a sample, comprising:
by a computer:
acquiring Magnetic Resonance (MR) signals associated with the sample from a measurement device or memory;
accessing a set of predetermined coil magnetic field basis vectors, wherein coil sensitivities of coils in the measurement device are represented by a weighted superposition of the set of predetermined coil magnetic field basis vectors using the coefficients, and wherein the predetermined coil magnetic field basis vectors are solutions to Maxwell's equations; and
solving a nonlinear optimization problem for the MR information associated with the sample and the coefficients using the MR signals and the set of predetermined coil magnetic field basis vectors.
2. The method of claim 1, wherein a given coil sensitivity is represented by a linear superposition of the product of the coefficient and a predetermined coil magnetic field basis vector of the set of predetermined coil magnetic field basis vectors.
3. The method of claim 1, wherein the nonlinear optimization problem includes a term corresponding to the squared absolute value of a difference between the MR signal and an estimated MR signal corresponding to the MR information; and
wherein the term includes a contribution from the coil sensitivity of the coil in the measurement device.
4. The method of claim 3, wherein the nonlinear optimization problem includes one or more constraints on the reduction or minimization of the term; and
wherein the one or more constraints include a regularizer corresponding to a spatial distribution of the MR information.
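Claims 3-4 describe an objective with a squared-magnitude data-fidelity term plus a spatial regularizer. Below is a minimal sketch of such an objective, assuming a generic k-space encoding matrix and a total-variation-style penalty (one of many possible regularizers; the claims do not fix a particular one):

```python
import numpy as np

def objective(image, coeffs, basis, encoding, measured, lam):
    """Data fidelity plus regularization, in the spirit of claims 3-4 (illustrative).

    image:    (n_voxels,) complex image, i.e. the MR information being estimated
    coeffs:   (n_coils, n_basis) coefficients of the basis expansion
    basis:    (n_basis, n_voxels) predetermined coil magnetic field basis vectors
    encoding: (n_samples, n_voxels) k-space encoding matrix (assumed given)
    measured: (n_coils, n_samples) acquired MR signals
    lam:      regularization weight
    """
    sens = coeffs @ basis                       # coil sensitivities (claim 2)
    estimated = (sens * image) @ encoding.T     # estimated MR signal per coil
    data_term = np.sum(np.abs(measured - estimated) ** 2)  # claim 3: |difference|^2
    reg = lam * np.sum(np.abs(np.diff(image)))  # claim 4: spatial regularizer
    return data_term + reg
```

The data term couples the image and the coefficients through the product `sens * image`, which is why the overall problem is nonlinear even though each factor enters linearly on its own.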
5. The method of claim 1, wherein the MR information includes an image having a spatial distribution of one or more MR parameters in voxels associated with the sample, the voxels being specified by the MR signals.
6. The method of claim 1, wherein the MR signals correspond to Magnetic Resonance Imaging (MRI) or another MR measurement technique.
7. The method of claim 1, wherein the MR information comprises quantitative values of one or more MR parameters in voxels associated with the sample, the voxels being specified by the MR signals.
8. The method of claim 1, wherein the measurement device performs tensor field mapping, MR fingerprinting, or another quantitative MR measurement technique.
9. The method of claim 1, wherein the nonlinear optimization problem is solved iteratively until a convergence criterion is reached.
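One way to realize the iterative solution of claim 9 is alternating least squares: with the image fixed the coefficients enter linearly, and vice versa, so each half-step is an exact solve and iteration stops once the residual improvement falls below a tolerance. This particular solver is an assumption for illustration, not an algorithm prescribed by the patent:

```python
import numpy as np

def alternating_solve(measured, basis, encoding, n_iter=30, tol=1e-9):
    """Alternately update the image and the coil coefficients (illustrative).

    Each sub-problem is linear, so each update is an exact least-squares
    solve; iteration stops when the residual change falls below `tol`,
    i.e. the convergence criterion of claim 9.
    """
    n_coils = measured.shape[0]
    n_basis, n_voxels = basis.shape
    image = np.ones(n_voxels, dtype=complex)        # neutral starting image
    coeffs = np.zeros((n_coils, n_basis), dtype=complex)
    prev = np.inf
    residual = np.inf
    for _ in range(n_iter):
        # Coefficient update: with the image fixed, each coil's signal is
        # linear in that coil's coefficients.
        M = ((basis * image) @ encoding.T).T        # (n_samples, n_basis)
        for c in range(n_coils):
            coeffs[c] = np.linalg.lstsq(M, measured[c], rcond=None)[0]
        # Image update: with the sensitivities fixed, stack all coils into
        # one overdetermined linear system.
        sens = coeffs @ basis
        A = np.vstack([encoding * sens[c] for c in range(n_coils)])
        image = np.linalg.lstsq(A, measured.ravel(), rcond=None)[0]
        residual = np.linalg.norm(A @ image - measured.ravel())
        if prev - residual < tol:                   # convergence criterion
            break
        prev = residual
    return image, coeffs, residual
```

In practice the encoding matrix would come from the pulse sequence and the basis from an offline Maxwell solver; here both would be synthesized, and a regularizer (claim 4) could be folded into each least-squares step.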
10. The method of claim 1, wherein the nonlinear optimization problem is solved using a pre-trained neural network or a pre-trained machine learning model that maps the MR signals and the set of coil magnetic field basis vectors to the MR information and the coefficients.
11. The method of claim 1, wherein solving the nonlinear optimization problem reconstructs MR scan lines that are skipped during measurements performed by the measurement device.
12. The method of claim 1, wherein an MR scan time of measurements performed by the measurement device is reduced relative to a Magnetic Resonance Imaging (MRI) parallel imaging technique.
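The scan-line recovery of claims 11-12 can be illustrated with classical SENSE-style arithmetic: with multiple coils, skipping every R-th phase-encode line still leaves an overdetermined linear system, so the unmeasured lines are implied by the coil-sensitivity model. The toy example below assumes known sensitivities and noiseless data; it is standard parallel-imaging math, not the patent's specific reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_coils, R = 16, 4, 2       # R: acceleration factor (illustrative)

# Unitary DFT encoding; keep only every R-th line, so the skipped lines of
# claim 11 are never measured, cutting scan time by roughly R (claim 12).
F = np.fft.fft(np.eye(n_voxels)) / np.sqrt(n_voxels)
E = F[::R]                            # (n_voxels // R, n_voxels)

# Hypothetical known coil sensitivities and a ground-truth image.
sens = rng.standard_normal((n_coils, n_voxels)) + 1j * rng.standard_normal((n_coils, n_voxels))
truth = rng.standard_normal(n_voxels) + 1j * rng.standard_normal(n_voxels)
measured = np.stack([E @ (sens[c] * truth) for c in range(n_coils)])

# Stacking the coils gives n_coils / R equations per voxel, so the system
# stays overdetermined and the skipped lines are recoverable.
A = np.vstack([E * sens[c] for c in range(n_coils)])
recon = np.linalg.lstsq(A, measured.ravel(), rcond=None)[0]
assert np.allclose(recon, truth)      # noiseless toy case: exact recovery
```

The patent's contribution is to treat the sensitivities themselves as unknowns expanded in Maxwell basis vectors rather than assuming them known, but the counting argument for why skipped lines can be recovered is the same.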
13. A computer, comprising:
an interface circuit configured to communicate with a measurement device;
a memory configured to store program instructions; and
a processor configured to execute the program instructions, wherein the program instructions, when executed by the processor, cause the computer to perform operations comprising:
acquiring Magnetic Resonance (MR) signals associated with a sample from the measurement device or the memory;
accessing a set of predetermined coil magnetic field basis vectors, wherein coil sensitivities of coils in the measurement device are represented by a weighted superposition of the set of predetermined coil magnetic field basis vectors using coefficients, and wherein the predetermined coil magnetic field basis vectors are solutions to Maxwell's equations; and
solving, using the MR signals and the set of predetermined coil magnetic field basis vectors, a nonlinear optimization problem for the MR information associated with the sample and for the coefficients.
14. The computer of claim 13, wherein the nonlinear optimization problem includes a term corresponding to the squared absolute value of a difference between the MR signal and an estimated MR signal corresponding to the MR information; and
wherein the term includes a contribution from the coil sensitivity of the coil in the measurement device.
15. The computer of claim 13, wherein the MR information includes an image having a spatial distribution of one or more MR parameters in voxels associated with the sample, the voxels being specified by the MR signals.
16. The computer of claim 13, wherein the MR signals correspond to Magnetic Resonance Imaging (MRI) or another MR measurement technique.
17. The computer of claim 13, wherein the MR information includes quantitative values of one or more MR parameters in voxels associated with the sample, the voxels being specified by the MR signals.
18. The computer of claim 13, wherein the MR signals correspond to tensor field mapping, MR fingerprinting, or another quantitative MR measurement technique.
19. The computer of claim 13, wherein the nonlinear optimization problem is solved iteratively until a convergence criterion is reached.
20. The computer of claim 13, wherein the nonlinear optimization problem is solved using a pre-trained neural network or a pre-trained machine learning model that maps the MR signals and the set of coil magnetic field basis vectors to the MR information and the coefficients.
21. The computer of claim 13, wherein solving the nonlinear optimization problem reconstructs MR scan lines that are skipped during measurements performed by the measurement device.
22. The computer of claim 13, wherein an MR scan time of measurements performed by the measurement device is reduced relative to a Magnetic Resonance Imaging (MRI) parallel imaging technique.
23. A non-transitory computer-readable storage medium for use in conjunction with a computer, the computer-readable storage medium configured to store program instructions that, when executed by the computer, cause the computer to perform operations comprising:
acquiring Magnetic Resonance (MR) signals associated with a sample from a measurement device or memory;
accessing a set of predetermined coil magnetic field basis vectors, wherein coil sensitivities of coils in the measurement device are represented by a weighted superposition of the set of predetermined coil magnetic field basis vectors using coefficients, and wherein the predetermined coil magnetic field basis vectors are solutions to Maxwell's equations; and
solving, using the MR signals and the set of predetermined coil magnetic field basis vectors, a nonlinear optimization problem for the MR information associated with the sample and for the coefficients.
24. The non-transitory computer-readable storage medium of claim 23, wherein the nonlinear optimization problem includes a term corresponding to the squared absolute value of a difference between the MR signal and an estimated MR signal corresponding to the MR information; and
wherein the term includes a contribution from the coil sensitivity of the coil in the measurement device.
25. The non-transitory computer-readable storage medium of claim 23, wherein the MR information includes an image having a spatial distribution of one or more MR parameters in voxels associated with the sample, the voxels being specified by the MR signals.
26. The non-transitory computer-readable storage medium of claim 23, wherein the MR signals correspond to Magnetic Resonance Imaging (MRI) or another MR measurement technique.
27. The non-transitory computer-readable storage medium of claim 23, wherein the MR information includes quantitative values of one or more MR parameters in voxels associated with the sample, the voxels specified by the MR signals.
28. The non-transitory computer-readable storage medium of claim 23, wherein the MR signals correspond to tensor field mapping, MR fingerprinting, or another quantitative MR measurement technique.
29. The non-transitory computer-readable storage medium of claim 23, wherein the nonlinear optimization problem is solved iteratively until a convergence criterion is reached.
30. The non-transitory computer-readable storage medium of claim 23, wherein the nonlinear optimization problem is solved using a pre-trained neural network or a pre-trained machine learning model that maps the MR signals and the set of coil magnetic field basis vectors to the MR information and the coefficients.
31. The non-transitory computer-readable storage medium of claim 23, wherein solving the nonlinear optimization problem reconstructs MR scan lines that are skipped during measurements performed by the measurement device.
32. The non-transitory computer-readable storage medium of claim 23, wherein an MR scan time of measurements performed by the measurement device is reduced relative to a Magnetic Resonance Imaging (MRI) parallel imaging technique.
CN202080064460.4A 2019-09-27 2020-09-25 Maxwell parallel imaging Active CN114450599B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962907516P 2019-09-27 2019-09-27
US62/907,516 2019-09-27
PCT/US2020/052717 WO2021062154A1 (en) 2019-09-27 2020-09-25 Maxwell parallel imaging

Publications (2)

Publication Number Publication Date
CN114450599A true CN114450599A (en) 2022-05-06
CN114450599B CN114450599B (en) 2023-07-07

Family

ID=75163078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080064460.4A Active CN114450599B (en) 2019-09-27 2020-09-25 Maxwell parallel imaging

Country Status (9)

Country Link
US (1) US11131735B2 (en)
EP (1) EP4034901A4 (en)
JP (1) JP2022548830A (en)
KR (1) KR102622283B1 (en)
CN (1) CN114450599B (en)
BR (1) BR112022004126A2 (en)
CA (1) CA3153503A1 (en)
MX (1) MX2022003462A (en)
WO (1) WO2021062154A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11354586B2 (en) 2019-02-15 2022-06-07 Q Bio, Inc. Model parameter determination using a predictive model
US11223543B1 (en) * 2020-09-29 2022-01-11 Dell Products L.P. Reconstructing time series datasets with missing values utilizing machine learning
WO2022271286A1 (en) * 2021-06-21 2022-12-29 Q Bio, Inc. Maxwell parallel imaging
CN113779818B (en) * 2021-11-15 2022-02-08 Central South University Three-dimensional geologic body electromagnetic field numerical simulation method, device, equipment and medium thereof
WO2023096557A1 (en) * 2021-11-24 2023-06-01 Corsmed Ab A method for image parameter comparison in magnetic resonance imaging simulation

Citations (8)

Publication number Priority date Publication date Assignee Title
US6528998B1 (en) * 2000-03-31 2003-03-04 Ge Medical Systems Global Technology Co., Llc Method and apparatus to reduce the effects of maxwell terms and other perturbation magnetic fields in MR images
US20050251023A1 (en) * 2004-04-30 2005-11-10 Stephan Kannengiesser Method and MR apparatus for PPA MR imaging with radial data acquisition
US7423430B1 (en) * 2007-04-06 2008-09-09 The Board Of Trustees Of The University Of Illinois Adaptive parallel acquisition and reconstruction of dynamic MR images
CN102947721A (en) * 2010-06-23 2013-02-27 皇家飞利浦电子股份有限公司 Method of reconstructing a magnetic resonance image of an object considering higher-order dynamic fields
WO2017007663A1 (en) * 2015-07-07 2017-01-12 Tesla Health, Inc Field-invariant quantitative magnetic-resonance signatures
US20170123029A1 (en) * 2015-10-29 2017-05-04 Siemens Healthcare Gmbh Method and magnetic resonance apparatus for maxwell compensation in simultaneous multislice data acquisitions
US20170285123A1 (en) * 2016-04-03 2017-10-05 Q Bio, Inc Tensor field mapping
CN110140042A (zh) * 2016-05-31 2019-08-16 Q Bio, Inc Tensor field mapping

Family Cites Families (70)

Publication number Priority date Publication date Assignee Title
US4729892A (en) 1986-03-21 1988-03-08 Ciba-Geigy Corporation Use of cross-linked hydrogel materials as image contrast agents in proton nuclear magnetic resonance tomography and tissue phantom kits containing such materials
US5486762A (en) 1992-11-02 1996-01-23 Schlumberger Technology Corp. Apparatus including multi-wait time pulsed NMR logging method for determining accurate T2-distributions and accurate T1/T2 ratios and generating a more accurate output record using the updated T2-distributions and T1/T2 ratios
US6392409B1 (en) 2000-01-14 2002-05-21 Baker Hughes Incorporated Determination of T1 relaxation time from multiple wait time NMR logs acquired in the same or different logging passes
US6678669B2 (en) 1996-02-09 2004-01-13 Adeza Biomedical Corporation Method for selecting medical and biochemical diagnostic tests using neural network-related applications
US5793210A (en) 1996-08-13 1998-08-11 General Electric Company Low noise MRI scanner
US6084408A (en) 1998-02-13 2000-07-04 Western Atlas International, Inc. Methods for acquisition and processing of nuclear magnetic resonance signals for determining fluid properties in petroleum reservoirs having more than one fluid phase
US6148272A (en) 1998-11-12 2000-11-14 The Regents Of The University Of California System and method for radiation dose calculation within sub-volumes of a monte carlo based particle transport grid
US8781557B2 (en) 1999-08-11 2014-07-15 Osteoplastics, Llc Producing a three dimensional model of an implant
US6476606B2 (en) * 1999-12-03 2002-11-05 Johns Hopkins University Method for parallel spatial encoded MRI and apparatus, systems and other methods related thereto
EP1330671B1 (en) 2000-09-18 2008-05-07 Vincent Lauer Confocal optical scanning device
WO2002067202A1 (en) 2001-02-16 2002-08-29 The Government Of The United States Of America, Represented By The Secretary, Department Of Health And Human Services Real-time, interactive volumetric mri
US20020155587A1 (en) 2001-04-20 2002-10-24 Sequenom, Inc. System and method for testing a biological sample
US6838875B2 (en) 2002-05-10 2005-01-04 Schlumberger Technology Corporation Processing NMR data in the presence of coherent ringing
US7253619B2 (en) 2003-04-04 2007-08-07 Siemens Aktiengesellschaft Method for evaluating magnetic resonance spectroscopy data using a baseline model
US7596402B2 (en) * 2003-05-05 2009-09-29 Case Western Reserve University MRI probe designs for minimally invasive intravascular tracking and imaging applications
US20050096534A1 (en) 2003-10-31 2005-05-05 Yudong Zhu Systems and methods for calibrating coil sensitivity profiles
US7820398B2 (en) 2003-11-06 2010-10-26 Grace Laboratories Inc. Immunosorbent blood tests for assessing paroxysmal cerebral discharges
US7940927B2 (en) 2005-04-27 2011-05-10 Panasonic Corporation Information security device and elliptic curve operating device
WO2008011112A2 (en) 2006-07-19 2008-01-24 University Of Connecticut Method and apparatus for medical imaging using combined near-infrared optical tomography, fluorescent tomography and ultrasound
US7974942B2 (en) 2006-09-08 2011-07-05 Camouflage Software Inc. Data masking system and method
US20080082837A1 (en) 2006-09-29 2008-04-03 Protegrity Corporation Apparatus and method for continuous data protection in a distributed computing network
EP2082218A2 (en) 2006-10-03 2009-07-29 Oklahoma Medical Research Foundation Metabolite detection using magnetic resonance
US7777487B2 (en) * 2007-02-15 2010-08-17 Uwm Research Foundation, Inc. Methods and apparatus for joint image reconstruction and coil sensitivity estimation in parallel MRI
WO2008112146A2 (en) 2007-03-07 2008-09-18 The Trustees Of The University Of Pennsylvania 2d partially parallel imaging with k-space surrounding neighbors based data reconstruction
EP2156206A1 (en) 2007-05-31 2010-02-24 Koninklijke Philips Electronics N.V. Method of automatically acquiring magnetic resonance image data
WO2009067680A1 (en) 2007-11-23 2009-05-28 Mercury Computer Systems, Inc. Automatic image segmentation methods and apparartus
US7924002B2 (en) 2008-01-09 2011-04-12 The Board Of Trustees Of The Leland Stanford Junior University Magnetic resonance field map estimation for species separation
US20090240138A1 (en) 2008-03-18 2009-09-24 Steven Yi Diffuse Optical Tomography System and Method of Use
WO2009129265A1 (en) 2008-04-14 2009-10-22 Huntington Medical Research Institutes Methods and apparatus for pasadena hyperpolarization
DE102008029897B4 (en) 2008-06-24 2017-12-28 Siemens Healthcare Gmbh Method for recording MR data of a measurement object in an MR examination in a magnetic resonance system and correspondingly configured magnetic resonance system
US8078554B2 (en) 2008-09-03 2011-12-13 Siemens Medical Solutions Usa, Inc. Knowledge-based interpretable predictive model for survival analysis
CN102159965B (en) 2008-09-17 2014-09-24 皇家飞利浦电子股份有限公司 B1-mapping and b1l-shimming for mri
EP2165737A1 (en) 2008-09-18 2010-03-24 Koninklijke Philips Electronics N.V. Ultrasonic treatment apparatus with a protective cover
EP2189925A3 (en) 2008-11-25 2015-10-14 SafeNet, Inc. Database obfuscation system and method
JP5212122B2 (en) 2009-01-09 2013-06-19 ソニー株式会社 Biological sample image acquisition apparatus, biological sample image acquisition method, and program
DE102009014924B3 (en) 2009-03-25 2010-09-16 Bruker Biospin Mri Gmbh Reconstruction of spectral or image files with simultaneous excitation and detection in magnetic resonance
US8108311B2 (en) 2009-04-09 2012-01-31 General Electric Company Systems and methods for constructing a local electronic medical record data store using a remote personal health record server
US8645164B2 (en) 2009-05-28 2014-02-04 Indiana University Research And Technology Corporation Medical information visualization assistant system and method
US10102398B2 (en) 2009-06-01 2018-10-16 Ab Initio Technology Llc Generating obfuscated data
RU2013104364A (ru) * 2010-07-02 2014-08-10 Koninklijke Philips Electronics N.V. Computer program product, computer-implemented method and magnetic resonance imaging system for producing a magnetic resonance image
US8686727B2 (en) 2010-07-20 2014-04-01 The Trustees Of The University Of Pennsylvania CEST MRI methods for imaging of metabolites and the use of same as biomarkers
US10148623B2 (en) 2010-11-12 2018-12-04 Time Warner Cable Enterprises Llc Apparatus and methods ensuring data privacy in a content distribution network
CN102654568A (en) 2011-03-01 2012-09-05 西门子公司 Method and device for establishing excitation parameters for mr imaging
EP2681577A1 (en) 2011-03-01 2014-01-08 Koninklijke Philips N.V. Determination of a magnetic resonance imaging pulse sequence protocol classification
US8723518B2 (en) 2011-03-18 2014-05-13 Nicole SEIBERLICH Nuclear magnetic resonance (NMR) fingerprinting
TW201319296A (en) 2011-06-21 2013-05-16 Sumitomo Chemical Co Method for inspecting laminated film and method for manufacturing laminated film
US8861815B2 (en) 2011-08-03 2014-10-14 International Business Machines Corporation Systems and methods for modeling and processing functional magnetic resonance image data using full-brain vector auto-regressive model
CN103189837B (en) 2011-10-18 2016-12-28 松下知识产权经营株式会社 Shuffle mode generative circuit, processor, shuffle mode generate method, order
GB201121307D0 (en) 2011-12-12 2012-01-25 Univ Stavanger Probability mapping for visualisation of biomedical images
US9146293B2 (en) * 2012-02-27 2015-09-29 Ohio State Innovation Foundation Methods and apparatus for accurate characterization of signal coil receiver sensitivity in magnetic resonance imaging (MRI)
US20130294669A1 (en) 2012-05-02 2013-11-07 University Of Louisville Research Foundation, Inc. Spatial-spectral analysis by augmented modeling of 3d image appearance characteristics with application to radio frequency tagged cardiovascular magnetic resonance
US9513359B2 (en) 2012-09-04 2016-12-06 General Electric Company Systems and methods for shim current calculation
EP2897523A4 (en) 2012-09-19 2017-03-08 Case Western Reserve University Nuclear magnetic resonance (nmr) fingerprinting
US9965808B1 (en) 2012-12-06 2018-05-08 The Pnc Financial Services Group, Inc. Systems and methods for projecting and managing cash-in flow for financial accounts
EP2958496B1 (en) 2013-02-25 2017-05-03 Koninklijke Philips N.V. Determination of the concentration distribution of sonically dispersive elements
WO2014205275A1 (en) 2013-06-19 2014-12-24 Office Of Technology Transfer, National Institutes Of Health Mri scanner bore coverings
US8752178B2 (en) 2013-07-31 2014-06-10 Splunk Inc. Blacklisting and whitelisting of security-related events
DE102013218224B3 * 2013-09-11 2015-01-29 Siemens Aktiengesellschaft Determination of B1 maps
US9514169B2 (en) 2013-09-23 2016-12-06 Protegrity Corporation Columnar table data protection
EP3134747B1 (en) 2014-04-25 2021-12-01 Mayo Foundation for Medical Education and Research Integrated image reconstruction and gradient non-linearity correction for magnetic resonance imaging
US10883956B2 (en) 2014-05-27 2021-01-05 Case Western Reserve University Electrochemical sensor for analyte detection
US20150370462A1 (en) 2014-06-20 2015-12-24 Microsoft Corporation Creating calendar event from timeline
US9485088B2 (en) 2014-10-31 2016-11-01 Combined Conditional Access Development And Support, Llc Systems and methods for dynamic data masking
US10716485B2 (en) 2014-11-07 2020-07-21 The General Hospital Corporation Deep brain source imaging with M/EEG and anatomical MRI
EP3093677A1 (en) 2015-05-15 2016-11-16 UMC Utrecht Holding B.V. Time-domain mri
WO2018136705A1 (en) * 2017-01-19 2018-07-26 Ohio State Innovation Foundation Estimating absolute phase of radio frequency fields of transmit and receive coils in a magnetic resonance
US10488352B2 (en) * 2017-01-27 2019-11-26 Saudi Arabian Oil Company High spatial resolution nuclear magnetic resonance logging
EP3457160A1 (en) * 2017-09-14 2019-03-20 Koninklijke Philips N.V. Parallel magnetic resonance imaging with archived coil sensitivity maps
US10712416B1 (en) * 2019-02-05 2020-07-14 GE Precision Healthcare, LLC Methods and systems for magnetic resonance image reconstruction using an extended sensitivity model and a deep neural network
US11143730B2 (en) * 2019-04-05 2021-10-12 University Of Cincinnati System and method for parallel magnetic resonance imaging


Also Published As

Publication number Publication date
EP4034901A4 (en) 2023-11-01
JP2022548830A (en) 2022-11-22
MX2022003462A (en) 2022-04-19
US20210096203A1 (en) 2021-04-01
WO2021062154A1 (en) 2021-04-01
CN114450599B (en) 2023-07-07
KR20220070502A (en) 2022-05-31
CA3153503A1 (en) 2021-04-01
BR112022004126A2 (en) 2022-05-31
EP4034901A1 (en) 2022-08-03
US11131735B2 (en) 2021-09-28
KR102622283B1 (en) 2024-01-08

Similar Documents

Publication Publication Date Title
JP7438230B2 (en) Determining model parameters using a predictive model
US11085984B2 (en) Tensor field mapping
US11360166B2 (en) Tensor field mapping with magnetostatic constraint
US10359486B2 (en) Rapid determination of a relaxation time
CN114450599B (en) 2023-07-07 Maxwell parallel imaging
US9958521B2 (en) Field-invariant quantitative magnetic-resonance signatures
KR102506092B1 (en) Tensor field mapping
KR102554506B1 (en) Field-invariant quantitative magnetic resonance signatures
US20230204700A1 (en) Sparse representation of measurements
US11614509B2 (en) Maxwell parallel imaging
WO2022271286A1 (en) Maxwell parallel imaging
US20240005514A1 (en) Dynamic segmentation of anatomical structures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant