WO2020072147A1 - Phase velocity imaging using an imaging system - Google Patents

Phase velocity imaging using an imaging system

Info

Publication number
WO2020072147A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
frequency
wavenumber
phase velocity
recited
Prior art date
Application number
PCT/US2019/048519
Other languages
French (fr)
Inventor
Piotr KIJANKA
Matthew W. Urban
Original Assignee
Mayo Foundation For Medical Education And Research
Application filed by Mayo Foundation For Medical Education And Research filed Critical Mayo Foundation For Medical Education And Research
Priority to JP2021518665A priority Critical patent/JP2022504299A/en
Priority to EP19765946.9A priority patent/EP3861368A1/en
Priority to US17/282,569 priority patent/US20210341429A1/en
Publication of WO2020072147A1 publication Critical patent/WO2020072147A1/en


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52023Details of receivers
    • G01S7/52036Details of receivers using analysis of echo signal for target characterisation
    • G01S7/52042Details of receivers using analysis of echo signal for target characterisation determining elastic properties of the propagation medium or of the reflective target
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062Arrangements for scanning
    • A61B5/0066Optical coherence imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves 
    • A61B5/055Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves  involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7253Details of waveform analysis characterised by using transforms
    • A61B5/7257Details of waveform analysis characterised by using transforms using Fourier transforms
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/48Diagnostic techniques
    • A61B8/485Diagnostic techniques involving measuring strain or elastic properties
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5207Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/54Control of the diagnostic device
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04Analysing solids
    • G01N29/06Visualisation of the interior, e.g. acoustic microscopy
    • G01N29/0654Imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04Analysing solids
    • G01N29/06Visualisation of the interior, e.g. acoustic microscopy
    • G01N29/0654Imaging
    • G01N29/0672Imaging by acoustic tomography
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04Analysing solids
    • G01N29/07Analysing solids by measuring propagation velocity or propagation time of acoustic waves
    • G01N29/075Analysing solids by measuring propagation velocity or propagation time of acoustic waves by measuring or comparing phase angle
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/44Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N29/449Statistical methods not provided for in G01N29/4409, e.g. averaging, smoothing and interpolation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/44Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N29/46Processing the detected response signal, e.g. electronic circuits specially adapted therefor by spectral analysis, e.g. Fourier analysis or wavelet analysis
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/5205Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/523Details of pulse systems
    • G01S7/526Receivers
    • G01S7/53Means for transforming coordinates or for evaluating data, e.g. using computers
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0004Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by the type of physiological signal transmitted
    • A61B5/0013Medical image data
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/56Details of data transmission or power supply
    • A61B8/565Details of data transmission or power supply involving data transmission via a network
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/01Indexing codes associated with the measuring variable
    • G01N2291/012Phase angle

Definitions

  • Soft tissue pathology can cause changes in mechanical properties.
  • Many methods for measurement of soft tissue mechanical properties have been developed over the past two decades.
  • One class of methods called shear wave elastography (“SWE”) encompasses various different implementations including supersonic shear imaging, comb-push ultrasound shear elastography, among others. These methods use acoustic radiation force to generate shear waves and ultrasound techniques to measure the shear wave motion.
  • phase-domain assessment of shear wave velocity has been performed with phase gradient or Fourier transform-based methods. These methods have been used to estimate phase velocity dispersion due to material viscoelasticity and geometry. Most of these measurement techniques, as implemented, use a limited spatial extent both in the lateral and axial directions. To fill that void, efforts have been made to extend these methods to create images of the phase velocity or viscoelastic properties such as the shear wave attenuation and material viscosity.
  • the present disclosure addresses the aforementioned drawbacks by providing a method for generating a phase velocity image from mechanical wave motion data acquired with an ultrasound system.
  • Mechanical wave motion data acquired with an ultrasound system are accessed, and phase velocity data are generated from them.
  • a phase velocity image is then generated from the phase velocity data.
  • Optimal acquisition parameters for the ultrasound system are determined by (a) acquiring mechanical wave motion data with the ultrasound system over a range of frequency values, (b) reconstructing phase velocity images from the mechanical wave motion data, and (c) analyzing the phase velocity images to determine whether a plateau in phase velocities is present in the phase velocity images.
  • the acquisition parameters of the ultrasound system are adjusted and steps (a)-(c) are repeated using different frequency values until the plateau in phase velocities is present in the phase velocity images.
  • the adjusted acquisition parameters are stored as the optimal acquisition parameters.
  • the optimal acquisition parameters are associated with an optimal bandwidth of mechanical wave motion in order to improve accuracy of phase velocity measurements.
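The (a)-(c) loop above can be sketched in code. The sketch below is a minimal illustration, assuming hypothetical `acquire` and `reconstruct` callables supplied by the ultrasound system and the reconstruction pipeline; the plateau test shown is a simple relative-variation criterion, not necessarily the patent's specific metric.

```python
import numpy as np

def has_plateau(velocities, tol=0.05):
    """Return True if the phase-velocity curve flattens: the relative
    variation over the last few frequency samples falls below `tol`."""
    tail = np.asarray(velocities[-4:], dtype=float)
    return np.ptp(tail) / np.mean(tail) < tol

def optimize_acquisition(acquire, reconstruct, freq_bands, max_iters=10):
    """Sketch of the (a)-(c) loop: acquire wave motion data over a
    frequency range, reconstruct phase velocities, and stop once a
    plateau appears. `acquire` and `reconstruct` stand in for the
    system-specific acquisition and image-reconstruction calls."""
    for params in freq_bands[:max_iters]:
        data = acquire(params)            # (a) acquire wave motion data
        velocities = reconstruct(data)    # (b) reconstruct phase velocities
        if has_plateau(velocities):       # (c) check for a plateau
            return params                 # store as the optimal parameters
    return None
```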
  • FIG. 1 is a flowchart setting forth the steps of an example method for generating a phase velocity image from wave motion data acquired with an imaging system.
  • FIG. 2 is a flowchart setting forth the steps of an example method for generating a phase velocity image from wave motion data acquired with an imaging system and using a local phase velocity imaging (“LPVI”) algorithm.
  • LPVI local phase velocity imaging
  • FIG. 3 is a flowchart setting forth the steps of an example LPVI method for generating a phase velocity image from wave motion data.
  • FIG. 4 is a block diagram of an example system that can implement the methods described in the present disclosure.
  • FIG. 5 is a block diagram of example hardware components that can be implemented in the system of FIG. 4.
  • FIG. 6 is a block diagram of an example ultrasound imaging system that can be implemented to acquire wave motion data.
  • phase velocity imaging using an ultrasound or other suitable imaging system, such as an optical imaging system or a magnetic resonance imaging (“MRI”) system.
  • MRI magnetic resonance imaging
  • phase velocity images e.g., 2D images, 3D images
  • the wave data may be guided wave data, shear wave data, or other such mechanical wave data.
  • the systems and methods described in the present disclosure operate in the frequency domain and can be implemented using a single frequency or a band of selected frequencies. If there are multiple acoustic radiation force pushes within the field-of-view or multiple wave sources, directional filtering may be performed to separate waves propagating in different directions. The reconstructions described below can be performed for each of these directionally filtered components.
  • LPVI implements local wavenumber imaging for the purposes of creating images of phase velocity in soft tissues. For instance, local estimates of the wavenumber of propagating guided Lamb waves can be utilized. Wavenumber changes as a function of spatial position can be analyzed with these systems and methods. For instance, new wavenumber components created by abrupt wave changes at a structural discontinuity can be identified in the frequency-wavenumber spectra.
  • LPVI is able to accurately reconstruct 2D wave velocity maps (e.g., phase velocity maps) and provide good contrast between inclusions and surrounding background tissues without artifacts.
  • Referring to FIG. 1, a flowchart is illustrated setting forth the steps of an example method for phase velocity imaging using an imaging system.
  • the wave motion data, v_m, are accessed by a computer system at step 102 and then optionally directionally filtered, if desired or needed, into q different components at step 104.
  • wave motion data are acquired using an acquisition scheme in which multiple wave sources are induced in a single acquisition (e.g., by using multiple different push beams or other wave sources), then directional filtering can advantageously reduce wave interferences due to reflection and refraction. Additionally or alternatively, wavenumber filtering may also be applied to the wave motion data, as indicated at optional process block 106.
  • Accessing the wave motion data can include retrieving wave motion data from a memory or other data storage device or medium.
  • accessing the wave motion data can include acquiring such data with an imaging system and communicating or otherwise transferring the wave motion data to a computer system, which may be a part of the imaging system.
  • the imaging system can be an ultrasound system, an optical imaging system (e.g., an optical coherence tomography system), an MRI system (e.g., by using magnetic resonance elastography techniques), or any other imaging system suitable for imaging propagating mechanical waves (e.g., shear waves, guided waves, other mechanical waves).
  • the wave motion data may be mechanical wave motion data, such as shear wave motion data, guided wave motion data, or the like, and may be indicative of particle displacement, velocity, acceleration, or combinations thereof.
  • the wave motion data may be 3D wave motion data (e.g., two spatial dimensions, one temporal dimension) or 4D wave motion data (e.g., three spatial dimensions, one temporal dimension).
  • phase velocity estimation algorithms can then be employed at step 108 to compute or otherwise generate phase velocity data, which may include calculating the phase velocity components in the lateral, axial, and elevation directions. This step may isolate one frequency or a set of selected frequencies (e.g., a frequency band).
  • increasing the selected frequency may improve the quality of the reconstructed phase velocity image, in the sense that the phase velocity can be more uniform and because the shape of the inclusion may be more accurately represented.
  • the magnitude of the wave velocity e.g., shear wave velocity
  • the spatial resolution of the reconstructed phase velocity images is determined by the spatial wavelengths, which are shorter at the higher frequencies. This is advantageous for resolving small inclusions with high contrast. In some instances, using a single frequency may not provide adequate signal-to-noise levels. Using a frequency band can preserve the general spatial wavelength, but improve the signal-to- noise.
  • a moving patch of p pixels can be used to calculate the phase velocity.
  • the phase velocity estimation algorithm can be implemented with a hardware processor and a memory, and can include a phase gradient ("PG”) algorithm, a two-dimensional Fourier transform algorithm, a two-point continuous wavelet (“2P-CWT”) algorithm, a multiple signal classification (“MUSIC”) algorithm, an LPVI algorithm, or other suitable phase velocity estimation algorithm.
  • PG phase gradient
  • 2P-CWT two-point continuous wavelet
  • MUSIC multiple signal classification
  • a window can be used for averaging at step 110.
  • the averaging can be performed using weights based on normalized cross-correlation of the data.
  • m is the pixel lateral dimension.
  • n is the pixel axial dimension.
  • CC_x and CC_z are the normalized cross-correlation coefficients along the x- and z-directions.
  • the phase velocity components Vx,AH,DF,q(m, n, f), Vy,AH,DF,q(m, n, f), and Vz,AH,DF,q(m, n, f) can be combined either in two dimensions or three dimensions, as indicated at step 112.
  • the phase velocity components can be combined with the following equations:
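The combination equations themselves do not survive in this extraction. As a stand-in sketch, a common way to combine orthogonal phase-velocity components assumes the wavenumber-vector relation k^2 = k_x^2 + k_z^2 with v = 2πf/k, which gives 1/v^2 = 1/v_x^2 + 1/v_z^2 in 2D (this is an assumption here, not necessarily the patent's exact formula):

```python
import numpy as np

def combine_2d(vx, vz):
    """Combine lateral and axial phase-velocity components into a
    magnitude, assuming k^2 = kx^2 + kz^2 with v = 2*pi*f/k, so that
    1/v^2 = 1/vx^2 + 1/vz^2 (an assumed stand-in for the patent's
    combination equations)."""
    vx, vz = np.asarray(vx, float), np.asarray(vz, float)
    return vx * vz / np.sqrt(vx**2 + vz**2)

def combine_3d(vx, vy, vz):
    """Three-dimensional analogue: 1/v^2 = 1/vx^2 + 1/vy^2 + 1/vz^2."""
    vx, vy, vz = (np.asarray(a, float) for a in (vx, vy, vz))
    return vx * vy * vz / np.sqrt((vy * vz)**2 + (vx * vz)**2 + (vx * vy)**2)
```

For a wave travelling at 45 degrees in-plane with true speed v, each component is v*sqrt(2), and the combination recovers v.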
  • the directionally filtered components can be combined at step 114 to generate a phase velocity image.
  • the directionally filtered components can be combined using a weighted sum with weights b_q(m, n).
  • phase velocity images that can be used in various ways and applications.
  • One advantage of phase velocity images is that they can be reconstructed at a known frequency or using a known band of frequencies, which can provide standardization.
  • phase velocity changes with frequency, or dispersion.
  • the phase velocity dispersion can be fit with rheological models such as the Kelvin-Voigt, Maxwell, Generalized Maxwell, Zener, or fractional models to estimate viscoelastic parameters and storage and loss moduli.
  • material property maps e.g., viscoelastic parameter maps, mechanical parameter maps
  • These parameters provide an additional way to characterize soft tissues and other materials.
  • the phase velocity image reconstructions described in the present disclosure can be optimized using different metrics, such as the coefficient of variation (i.e., the ratio of the standard deviation to the mean in a region); the standard deviation or variance in a region; contrast between two different regions; contrast-to-noise ratio ("CNR") between two different regions; and resolution.
  • the processing parameters that can be altered in an iterative or guided fashion include the patch size, p ; window size, W ; the frequency or frequency band; or combinations of these parameters.
  • the optimization process can be performed in an iterative or guided process to maximize metrics (e.g., contrast, CNR) or minimize metrics (e.g., coefficient of variation, standard deviation, variance, resolution).
  • the size of a region or inclusion can be used.
  • a group velocity image and a survey of the center frequency of the shear wave motion can guide the frequency choice and size processing parameters.
  • a first phase velocity map can be generated from wave motion data acquired when a push beam excitation was generated on one side (e.g., a left side) of a volume (e.g., using the left side of an ultrasound transducer to generate the push beam excitation).
  • a second phase velocity map can then be generated from wave motion data acquired when a push beam excitation was generated on another side (e.g., a right side) of the volume (e.g., using the right side of the ultrasound transducer to generate the push beam excitation).
  • first and second phase velocity images can then be combined, such as by averaging the two images.
  • the phase velocity estimation algorithm can be a phase gradient ("PG”) algorithm.
  • a phase gradient calculation is used to reconstruct the phase velocity in a localized region defined as a patch, p , in units of pixels or spatial locations.
  • a one-dimensional Fourier transform is performed in the time dimension for each recording of the wave motion at each location.
  • the phase values at each location and for each frequency or set of frequencies are extracted for later use in calculating the phase gradient.
  • the phase gradient (φ2 − φ1, or Δφ) is calculated either using the end points (x2 − x1, or Δx) of the patch or by performing a linear regression (e.g., linear curve fit) of all the data points within the patch to provide local estimates of phase velocity, c(f) = 2πf·|Δx/Δφ|.
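The phase-gradient estimate can be sketched as follows, using the linear-regression variant. This is a minimal illustration assuming a plane wave recorded at equally spaced lateral positions; function and parameter names are illustrative, not from the source.

```python
import numpy as np

def phase_gradient_velocity(motion, dx, fs, f0):
    """Phase-gradient (PG) phase-velocity estimate within a patch.
    `motion` is a (n_positions, n_samples) array of wave motion recorded
    at laterally spaced points (spacing `dx`), sampled at rate `fs`.
    A 1D FFT in time extracts the phase at frequency `f0`; a linear fit
    of unwrapped phase versus position gives dphi/dx, and the local
    phase velocity is c = 2*pi*f0 / |dphi/dx|."""
    n = motion.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin0 = np.argmin(np.abs(freqs - f0))          # FFT bin nearest f0
    spectrum = np.fft.rfft(motion, axis=1)[:, bin0]
    phase = np.unwrap(np.angle(spectrum))         # phase vs. position
    x = np.arange(motion.shape[0]) * dx
    slope = np.polyfit(x, phase, 1)[0]            # dphi/dx (rad/m)
    return 2 * np.pi * f0 / abs(slope)
```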
  • the phase velocity estimation algorithm can be a two-dimensional Fourier transform (“2DFT”) algorithm.
  • 2DFT two-dimensional Fourier transform
  • a 2D Fourier transform of the spatiotemporal data, v_m(x, t), is computed over a certain range encompassing p pixels.
  • Using a 2D Fourier transform provides a representation in a spatial frequency and temporal frequency space, V_m(k, f). For each frequency, the peaks are identified in the magnitude spectrum |V_m(k, f)|. The locations of the peaks are used to calculate the phase velocity using c(f) = f/k_peak.
  • the locations for calculating the phase velocity can also be detected for each wavenumber, k, in a similar way as is done for each frequency. These methods are further described by M. Bernal, et al., in "Material property estimation for tubes and arteries using ultrasound radiation force and analysis of propagating modes," J. Acoust. Soc. Am., 2011; 129:1344-1354, which is herein incorporated by reference in its entirety.
  • identification of wave propagation modes can be used by using a gradient-based methodology and fitting a polynomial curve to the data points to provide a more robust estimation of the dispersion curve, as described by P. Kijanka, et al., in "Robust phase velocity dispersion estimation of viscoelastic materials used for medical applications based on the Multiple Signal Classification Method,” IEEE Trans Ultrason Ferroelectr Freq Control, 2018; 65:423-439, which is herein incorporated by reference in its entirety.
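The 2DFT peak-picking step can be sketched as below. This is an illustrative numpy implementation assuming the wavenumber axis is in cycles per meter, so that c(f) = f/k_peak; the DC line is excluded before peak picking.

```python
import numpy as np

def dispersion_2dft(motion, dx, fs, freqs_out):
    """Two-dimensional Fourier transform (2DFT) dispersion estimate.
    `motion` is a (n_positions, n_samples) spatiotemporal array. A 2D
    FFT gives |V(k, f)|; for each requested frequency the peak
    wavenumber is located and the phase velocity follows as
    c(f) = f / k_peak, with k in cycles/m."""
    nx, nt = motion.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(motion)))
    k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))        # cycles/m
    f = np.fft.fftshift(np.fft.fftfreq(nt, d=1.0 / fs))  # Hz
    velocities = []
    for f0 in freqs_out:
        col = np.argmin(np.abs(f - f0))
        mag = spec[:, col].copy()
        mag[np.abs(k) < 1e-12] = 0.0          # ignore the k = 0 (DC) line
        k_peak = abs(k[np.argmax(mag)])
        velocities.append(f0 / k_peak)
    return np.array(velocities)
```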
  • the phase velocity estimation algorithm can be a two-point continuous wavelet (“2P-CWT”) algorithm.
  • a wavelet such as the Morlet wavelet, g(t), is composed of a harmonic wave modulated by a Gaussian envelope, where T0 is the central period.
  • the mother wavelet, g(t), can be dilated by σ and delayed by τ as a daughter wavelet, g((t − τ)/σ).
  • the wavelet transform is defined as,
  • phase velocity can be estimated at a set of specified frequencies.
  • the continuous wavelet transform is calculated and a correlation is calculated to find the phase shift, Δφ, at a particular frequency from the unwrapped phase.
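The two-point idea can be sketched as below: filter both signals with a Morlet wavelet at the frequency of interest, take the phase of their zero-lag correlation, and convert the phase shift to velocity. The wavelet parameterization (`n_cycles`) and function names are illustrative assumptions, not from the source.

```python
import numpy as np

def morlet(t, f0, n_cycles=6.0):
    """Morlet wavelet: a complex harmonic at f0 under a Gaussian
    envelope whose width spans roughly `n_cycles` periods."""
    sigma = n_cycles / (2 * np.pi * f0)
    return np.exp(1j * 2 * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))

def two_point_cwt_velocity(sig1, sig2, dx, fs, f0):
    """Two-point CWT (2P-CWT) sketch: filter the signals from two
    points separated by `dx` with a Morlet wavelet at f0, take the
    phase of their zero-lag correlation as the phase shift dphi, and
    convert it to velocity: c = 2*pi*f0*dx / |dphi|."""
    n = len(sig1)
    t = (np.arange(n) - n // 2) / fs
    w = morlet(t, f0)
    a1 = np.convolve(sig1, w, mode="same")
    a2 = np.convolve(sig2, w, mode="same")
    dphi = np.angle(np.vdot(a1, a2))   # phase of sum(conj(a1) * a2)
    return 2 * np.pi * f0 * dx / abs(dphi)
```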
  • the phase velocity estimation algorithm can be a multiple signal classification (“MUSIC”) algorithm.
  • phase velocities can be estimated for a segment of p pixels.
  • the phase velocities can be identified using a gradient-based methodology and fitting a polynomial curve to the data points to provide a more robust estimation of the dispersion curve.
  • An example MUSIC algorithm is described in Patent Application No. PCT/US2018/036050, which is herein incorporated by reference in its entirety, and by P. Kijanka, et al., in the publication cited above.
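A minimal MUSIC sketch for a single temporal frequency is shown below. It assumes spatial smoothing over sliding subarrays to build the covariance matrix, a single dominant wave mode, and an illustrative subarray length; it is a generic MUSIC wavenumber estimator, not the patent's exact implementation.

```python
import numpy as np

def music_pseudospectrum(u, dx, k_grid, n_sources=1, subarray=8):
    """MUSIC pseudospectrum over trial wavenumbers (rad/m) for one
    frequency. `u` holds the complex frequency-domain wave values at
    equally spaced lateral positions (spacing `dx`). Sliding subarrays
    supply the snapshots for the spatial covariance matrix."""
    u = np.asarray(u, dtype=complex)
    L = subarray
    snaps = np.array([u[i:i + L] for i in range(len(u) - L + 1)])
    R = snaps.T @ snaps.conj() / len(snaps)   # spatial covariance
    _, vecs = np.linalg.eigh(R)               # eigenvalues ascending
    noise = vecs[:, : L - n_sources]          # noise subspace
    pseudo = []
    for k in np.atleast_1d(k_grid):
        a = np.exp(-1j * k * np.arange(L) * dx)   # steering vector
        proj = noise.conj().T @ a
        pseudo.append(1.0 / (np.real(np.vdot(proj, proj)) + 1e-12))
    return np.array(pseudo)

def music_phase_velocity(u, dx, f0, c_grid, subarray=8):
    """Scan candidate phase velocities; return the one whose wavenumber
    (k = 2*pi*f0/c) maximizes the MUSIC pseudospectrum."""
    k_grid = 2 * np.pi * f0 / np.asarray(c_grid, dtype=float)
    p = music_pseudospectrum(u, dx, k_grid, subarray=subarray)
    return np.asarray(c_grid)[np.argmax(p)]
```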
  • the phase velocity estimation algorithm can be a local phase velocity imaging (“LPVI”) algorithm.
  • FIG. 2 illustrates a flowchart setting forth the steps of an example method for phase velocity imaging using an ultrasound system and an LPVI algorithm.
  • the LPVI method can be used with or without wavenumber filtering to obtain the phase velocity images. Anderssen-Hegland averaging could be used as an additional step in the LPVI process.
  • wave motion data, v m are accessed by a computer system at step 202 and are then optionally directionally filtered, if desired or needed, into q different components at step 204.
  • wavenumber filtering may also be applied to the wave motion data, as indicated at optional process block 206.
  • Accessing the wave motion data can include retrieving wave motion data from a memory or other data storage device or medium.
  • accessing the wave motion data can include acquiring such data with an imaging system and communicating or otherwise transferring the wave motion data to a computer system, which may be a part of the imaging system.
  • the imaging system can be an ultrasound system, an optical imaging system, an MRI system, or any other imaging system suitable for imaging propagating mechanical waves (e.g., shear waves, guided waves, other mechanical waves).
  • the wave motion data may be mechanical wave motion data, such as shear wave motion data, guided wave motion data, or the like, and may be indicative of particle displacement, velocity, acceleration, or combinations thereof.
  • the wave motion data may be 3D wave motion data (e.g., two spatial dimensions, one temporal dimension) or 4D wave motion data (e.g., three spatial dimensions, one temporal dimension).
  • An LPVI algorithm is then employed at step 208 to compute or otherwise generate phase velocity data, such as by calculating the phase velocity for the different directionally filtered components.
  • the directionally filtered components of the wave motion data can be input to an LPVI algorithm, generating output as the phase velocity data.
  • This step may isolate one frequency or a set of selected frequencies (e.g., a frequency band).
  • the directionally filtered components can then be combined at step 210 to generate a phase velocity image.
  • the directionally filtered components can be combined according to Eqn. (5).
  • the LPVI algorithm operates in a "k-space" (frequency-wavenumber) domain where additional information about wave modes and wavenumber distribution can be obtained in comparison to the time-space domain.
  • k-space frequency-wavenumber
  • the transformation of the spatiotemporal particle motion v(z, x, t) to the k-space domain is accomplished using the three-dimensional (3D) Fourier transform (F_3D), V(k_z, k_x, f) = F_3D{v(z, x, t)}.
  • V(k_z, k_x, f) is the resulting 3D k-space representation in terms of the frequency and the wavenumber vectors, k_z and k_x.
  • the frequency, f, is the counterpart of the time domain, whereas the wavenumbers k_z and k_x are the counterparts of the spatial dimensions z and x, respectively.
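The 3D transform v(z, x, t) → V(k_z, k_x, f) can be sketched directly with numpy's FFT routines, returning the matching wavenumber and frequency axes (a straightforward sketch; axis conventions are numpy's, in cycles/m and Hz):

```python
import numpy as np

def to_kspace(v, dz, dx, fs):
    """3D Fourier transform of particle motion v(z, x, t) into the
    frequency-wavenumber (k-space) domain. Returns the shifted spectrum
    V(kz, kx, f) together with the kz, kx (cycles/m) and f (Hz) axes."""
    V = np.fft.fftshift(np.fft.fftn(v))
    kz = np.fft.fftshift(np.fft.fftfreq(v.shape[0], d=dz))
    kx = np.fft.fftshift(np.fft.fftfreq(v.shape[1], d=dx))
    f = np.fft.fftshift(np.fft.fftfreq(v.shape[2], d=1.0 / fs))
    return V, kz, kx, f
```

For a plane wave travelling in x, the spectral peak lands at kz = 0 and at the wave's (kx, f) pair, which is a quick sanity check on the axis ordering.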
  • Referring to FIG. 3, a flowchart is illustrated setting forth the steps of an example method for generating phase velocity data using an LPVI algorithm.
  • LPVI the transformation of the spatiotemporal particle motion can be performed in progressive steps.
  • the LPVI approach reconstructs local wave velocity (e.g., shear wave velocity, guided wave velocity) for a single frequency or selected frequency band.
  • the method begins with transforming the wave motion data to the frequency domain, as indicated at step 302.
  • transforming the wave motion data (e.g., the spatiotemporal particle motion data) to the frequency domain can be implemented using a one-dimensional Fourier transform (F_1D) as follows,
  • V(z, x, f) = ∫ v(z, x, t) e^(−i2πft) dt (13);
  • V(z, x, f) is the resulting particle velocity motion in the frequency domain.
  • the spatial domains of V(z, x, f) remain unchanged.
  • the wave motion data are first transformed into the frequency-wavenumber domain (e.g., by using a 3D Fourier transform) where the resulting spectrum data are directionally filtered, band-pass filtered, or both.
  • directional filters can be applied in a 2D wavenumber domain to extract directional components, such as left-to-right (“LR”) and right-to-left (“RL”) propagating wave components, and to remove wave interferences. Wave components propagating in the axial direction can also be reduced by the directional filter using a wedge filter with a smooth roll-off in the k z direction.
  • LR left-to-right
  • RL right-to-left
  • a first order Butterworth band-pass filter can be applied to each frame to remove spatial wavelengths representing wave velocities outside of a specified range.
  • the band-pass filter can be designed to remove spatial wavelengths representing wave velocities outside a predetermined range of 0.5-1.0 m/s.
  • the limits of the band-pass filter can be selected to follow a phase velocity curve, such as a phase velocity curve that is determined with a Fourier transform (e.g., a 2DFT).
  • the limits can be selected by giving a range of ±c_L to the dispersion curve, c_B(f), such that the limits are c_B(f) − c_L to c_B(f) + c_L.
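The combined directional and velocity band-pass filtering can be sketched as a k-space mask for one temporal frequency. The hard (binary) mask below is an assumption for brevity; the smooth wedge roll-off described above is omitted, and the function name is illustrative.

```python
import numpy as np

def directional_bandpass_mask(kz, kx, f0, c_lo, c_hi, direction=+1):
    """2D wavenumber-domain mask for one temporal frequency f0: keeps
    waves travelling left-to-right (direction=+1) or right-to-left
    (direction=-1) whose phase velocity f0/|k| lies in [c_lo, c_hi].
    `kz`, `kx` are wavenumber axes in cycles/m."""
    KZ, KX = np.meshgrid(kz, kx, indexing="ij")
    kmag = np.hypot(KZ, KX)
    with np.errstate(divide="ignore"):
        c = np.where(kmag > 0, f0 / kmag, np.inf)   # phase velocity per bin
    directional = (direction * KX) > 0              # half-plane selection
    bandpass = (c >= c_lo) & (c <= c_hi)
    return (directional & bandpass).astype(float)
```

The mask multiplies the spectrum V(k_z, k_x, f_0) bin-by-bin; an inverse 2D FFT then returns the filtered wavefield.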
  • This filtering process can be implemented as the product between the spectrum in the frequency-wavenumber domain, V(k_z, k_x, f), of the particle motion and a filter function in the form,
  • the filter function denotes the 2D filter in the k_z-k_x domain and V_W(k_z, k_x, f) is the filtered spectrum.
  • the frequency domain data are then generated by converting the filtered spectrum back into the frequency-space domain using an inverse 2D Fourier transform, resulting in V_W(z, x, f).
  • Spectrum data for a particular frequency or frequency band are then selected for processing, as indicated at step 304.
  • wavefield data can be obtained for a particular frequency, f_0, or a set of particular frequencies.
  • a short space two-dimensional ("2D") Fourier transform (F_2D) in the spatial domain is then performed on the selected spectrum data, V(z, x, f_0) or V_W(z, x, f_0), in order to generate windowed wavefield data, as indicated at step 306.
  • the short space 2DFT can be applied on the set of particular frequencies to speed up the calculations using parallel calculations. In these instances, the short space 2DFT does not need to be repeated, individually, for each particular frequency in order to reconstruct multiple phase velocity images, thereby significantly reducing computational time.
  • a short space 2DFT can be repeated for each pixel in the x and z directions.
  • the short space 2DFT can also be repeated in a sparse manner for every n-th or every m-th pixel in the x and z directions, respectively.
  • missing pixels can be reconstructed using a linear interpolation, or other suitable, step.
  • a linear interpolation step can be done by applying a 2D convolution method.
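As a sketch of this interpolation step, the skipped pixels of a sparsely evaluated velocity map can be filled with a normalized 2D convolution. The function name, kernel, and kernel size below are illustrative, not taken from the patent.

```python
import numpy as np
from scipy.signal import convolve2d

def fill_sparse_map(sparse_map, mask, ksize=5):
    """Fill pixels skipped by a sparse short-space 2DFT evaluation.

    sparse_map : 2D array holding computed values where mask == 1, zeros elsewhere
    mask       : 2D binary array marking the computed pixels

    Uses normalized 2D convolution: each pixel becomes the average of the
    computed values in its neighborhood (a simple linear-interpolation
    stand-in implemented with a 2D convolution).
    """
    kernel = np.ones((ksize, ksize))
    num = convolve2d(sparse_map * mask, kernel, mode="same")
    den = convolve2d(mask.astype(float), kernel, mode="same")
    den[den == 0] = 1.0  # leave pixels with no nearby samples at zero
    return num / den
```

A true bilinear interpolation over the sparse grid would give sharper results; the convolution form is shown because it parallelizes trivially.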
  • performing a short space 2DFT on the selected spectrum data includes breaking down the V(z, x, f_0) wavefield into small segments over the spatial dimensions before Fourier transformation, while retaining the spatial information.
  • the wavefield can be multiplied by a window function that is non-zero over only a small region in space while constant over the entire frequency domain.
  • this windowing process can be described as V_W(z, x, f_0) = V(z, x, f_0) W(z, x), where W(z, x) is the spatial window function.
  • the window function can be a 2D cosine-tapered (Tukey) window.
  • the 2D window can be formed by applying the 1D cosine-tapered window along each of the two spatial directions.
  • the parameter z is the retained spatial vector of the windowed part.
  • the choice of the spatial window size can be selected depending on the inclusion size that is evaluated.
  • One approach is to gradually increase the spatial window size for image reconstruction. This approach could be used to choose an optimal window size, which should be smaller than the investigated inclusion in order to avoid excessive flattening and blurring of the edges.
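The short-space windowing described above can be sketched with a 2D Tukey window built from the outer product of two 1D Tukey windows. The window sizes, taper parameter, and function name are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal.windows import tukey

def short_space_window(V, z0, x0, win_z, win_x, alpha=0.25):
    """Extract one short-space segment of the wavefield V(z, x) at a
    single frequency: multiply by a 2D cosine-tapered (Tukey) window
    centered at pixel (z0, x0), formed as the outer product of two 1D
    Tukey windows (one per spatial direction).
    """
    nz, nx = V.shape
    w2d = np.zeros((nz, nx))
    z_lo, z_hi = max(0, z0 - win_z // 2), min(nz, z0 + win_z // 2)
    x_lo, x_hi = max(0, x0 - win_x // 2), min(nx, x0 + win_x // 2)
    wz = tukey(z_hi - z_lo, alpha)          # 1D taper along z
    wx = tukey(x_hi - x_lo, alpha)          # 1D taper along x
    w2d[z_lo:z_hi, x_lo:x_hi] = np.outer(wz, wx)
    return V * w2d                          # windowed wavefield V_W(z, x, f0)
```

Sweeping (z0, x0) over the image grid (densely or every n-th pixel) yields the set of windowed wavefield regions used in the subsequent Fourier step.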
  • windowed wavefield regions, V_W(z, x, f_0), are generated (or their filtered counterparts when directional and/or band-pass filtering has been applied).
  • the windowed wavefield data are then Fourier transformed along the spatial dimensions, as indicated at step 308.
  • a 2D Fourier transform is applied to the windowed wavefield data, resulting in a set of 2D wavenumber (k_z, k_x) spectra for the particular frequency, f_0, which can be mathematically given as,
  • 1D Fourier transforms can be separately applied along the spatial dimensions, z and x, which results in a set of 1D wavenumber spectra for the particular frequency, f_0. This can be represented as,
  • V can also be V_F when directional and/or band-pass filtering are applied to the wave motion data.
  • z_N and x_N are the numbers of scanning lines in the z-direction and x-direction, respectively.
  • the resulting spectra can be stored in vectors for each direction.
  • Using separate 1D Fourier transforms to generate the wavenumber spectra instead of a 2D Fourier transform has the advantage of reducing computation time.
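The separable transforms can be sketched as two axis-wise FFTs, one per spatial direction; the helper name below is hypothetical.

```python
import numpy as np

def wavenumber_spectra_1d(Vw):
    """Compute the two sets of 1D wavenumber spectra of a windowed
    wavefield V_W(z, x) at one frequency: an FFT along z (one spectrum
    per x scan line) and an FFT along x (one spectrum per z scan line).
    This avoids a full 2D FFT when only per-direction spectra are needed.
    """
    Vz = np.fft.fft(Vw, axis=0)   # V_z(k_z, 1:x_N, f0)
    Vx = np.fft.fft(Vw, axis=1)   # V_x(1:z_N, k_x, f0)
    return Vz, Vx
```

For an N×N window, the two 1D passes cost the same O(N² log N) asymptotically as a 2D FFT, but each pass is half the work of the full transform, so computing only the direction actually needed saves time.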
  • the spatial distribution of the phase velocity of the wave motion for the frequency f_0 is then calculated, as indicated at step 310. In those instances where the wavenumber spectra data are generated using a 2D Fourier transform, the phase velocity can be calculated from the V_W(k_z, k_x, f_0) spectra, for all spatial parts, as,
  • the spatial distribution of the phase velocity of the wave motion for the frequency, f_0, can be calculated from the wavenumber spectra, V_z(k_z, 1:x_N, f_0) and V_x(1:z_N, k_x, f_0), for all spatial locations as,
  • the full field-of-view ("FOV") phase wave velocity image can then be reconstructed, as indicated at step 312.
  • the frequency f_0 can be a single frequency value or a frequency band, f_band, centered about f_0: f_0 - f_b, ..., f_0 + f_b.
  • the phase velocity can be defined through an equation,
  • Eqn. (28) can represent a phase wave velocity image at each location over the selected frequency band.
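The patent's exact estimators (Eqns. 27-28) are not reproduced in this extraction. One plausible local estimator consistent with the surrounding text, sketched below, takes the wavenumber at the magnitude peak of each local 2D spectrum, sets c(f) = 2πf/|k_peak|, and averages over the selected frequency band; the function name and estimator choice are assumptions.

```python
import numpy as np

def local_phase_velocity(V2d, kz, kx, freqs):
    """Estimate a local phase velocity from a stack of 2D wavenumber
    spectra (one per frequency in `freqs`), using the peak-wavenumber
    rule c(f) = 2*pi*f / |k_peak| and averaging over the band.

    V2d    : array (n_f, n_kz, n_kx) of spectrum magnitudes for one window
    kz, kx : 1D wavenumber axes (rad/m); freqs : 1D frequencies (Hz)
    """
    c_vals = []
    for Vf, f in zip(V2d, freqs):
        iz, ix = np.unravel_index(np.argmax(np.abs(Vf)), Vf.shape)
        k_peak = np.hypot(kz[iz], kx[ix])   # magnitude of peak wavenumber
        if k_peak > 0:
            c_vals.append(2 * np.pi * f / k_peak)
    return float(np.mean(c_vals))
```

Repeating this for every window position (z0, x0) fills in the full-FOV phase velocity image of step 312.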
  • phase velocity in a viscoelastic material will vary with frequency and often may plateau at higher frequencies.
  • the phase velocity for inclusions may plateau above 850 Hz.
  • a wide bandwidth of frequency may be advantageous.
  • a single acoustic radiation force push is used, which can provide waves of different frequencies depending on the medium.
  • phase velocity imaging techniques described in the present disclosure could be used to reconstruct images at multiple frequencies to determine whether a plateau in the wave velocities has been reached over the frequency range defined. If this has not been reached, different techniques can be utilized to increase the frequency content of the propagating waves (e.g., shear waves, guided waves, other mechanical waves).
  • the acoustic radiation force push beam can be altered in its spatial distribution (e.g., by changing number of elements used and/or focal location) or temporal application (e.g., by using repeated pushes at a certain frequency). Feedback between the image reconstruction and wave motion acquisition can be performed to optimize the bandwidth of the resulting wave motion in order to improve the accuracy of the measurements.
  • phase velocity imaging techniques described in the present disclosure can be used to evaluate the quality of the data and to assess the confidence of an estimation of mechanical properties from a given data acquisition.
  • Revised acquisition settings could be generated in response to this analysis and used for subsequent acquisitions within a given imaging session.
  • mechanical property maps can be generated.
  • elasticity and viscosity maps can be generated from local shear wave velocity images.
  • the Kelvin-Voigt ("KV") rheological viscoelastic model can be used to generate mechanical property maps.
  • the KV model is composed of a dashpot and a spring placed in parallel.
  • the stress-strain relationship of the KV model is represented in the form,
  • the stress, σ, is related to the strain, ε, by the shear elasticity, μ_1, the shear viscosity, μ_2, and the time derivative, d/dt.
  • the shear wave velocity of the KV model can be computed as c_s(ω) = sqrt(2(μ_1² + ω²μ_2²) / (ρ(μ_1 + sqrt(μ_1² + ω²μ_2²)))), where ρ is the mass density and ω = 2πf.
  • the elasticity and viscosity parameters can be locally estimated for each pixel by fitting the curve of Eqn. (30) over the frequency domain, such as by solving a nonlinear least-squares problem.
  • the following nonlinear least-squares problem can be solved:
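Under the standard KV dispersion relation, the per-pixel fit can be sketched with scipy's `least_squares`; the initial guess, bounds, and density value below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def kv_phase_velocity(f, mu1, mu2, rho=1000.0):
    """Kelvin-Voigt shear wave phase velocity c_s(omega) for shear
    elasticity mu1 (Pa), shear viscosity mu2 (Pa*s), and density rho."""
    w = 2 * np.pi * np.asarray(f)
    m = np.sqrt(mu1**2 + w**2 * mu2**2)
    return np.sqrt(2 * (mu1**2 + w**2 * mu2**2) / (rho * (mu1 + m)))

def fit_kv(freqs, c_measured, rho=1000.0):
    """Per-pixel nonlinear least-squares fit of (mu1, mu2) to a measured
    phase-velocity dispersion curve c(f)."""
    resid = lambda p: kv_phase_velocity(freqs, p[0], p[1], rho) - c_measured
    sol = least_squares(resid, x0=[3000.0, 1.0],
                        bounds=([1.0, 0.0], [1e6, 100.0]))
    return sol.x  # (mu1, mu2)
```

In the elastic limit (μ_2 = 0) the expression reduces to c_s = sqrt(μ_1/ρ), a useful sanity check on the fit.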
  • a computing device 450 can receive one or more types of data (e.g., ultrasound data, optical imaging data, magnetic resonance imaging data, mechanical wave data) from image source 402, which may be an ultrasound image source, such as an ultrasound imaging system; an optical imaging source; or a magnetic resonance imaging system.
  • computing device 450 can execute at least a portion of a phase velocity estimation system 404 to generate phase velocity images from data received from the image source 402.
  • the computing device 450 can communicate information about data received from the image source 402 to a server 452 over a communication network 454; the server 452 can execute at least a portion of the phase velocity estimation system 404 to generate phase velocity images from the data.
  • the server 452 can return information to the computing device 450 (and/or any other suitable computing device) indicative of an output of the phase velocity estimation system 404 to generate phase velocity images from data received from the image source 402.
  • computing device 450 and/or server 452 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on.
  • the computing device 450 and/or server 452 can also reconstruct images from the data.
  • image source 402 can be any suitable source of image data (e.g., measurement data, images reconstructed from measurement data), such as an ultrasound imaging system, an optical imaging system, an MRI system, another computing device (e.g., a server storing image data), and so on.
  • image source 402 can be local to computing device 450.
  • image source 402 can be incorporated with computing device 450 (e.g., computing device 450 can be configured as part of a device for capturing, scanning, and/or storing images).
  • image source 402 can be connected to computing device 450 by a cable, a direct wireless link, and so on.
  • image source 402 can be located locally and/or remotely from computing device 450, and can communicate data to computing device 450 (and/or server 452) via a communication network (e.g., communication network 454).
  • communication network 454 can be any suitable communication network or combination of communication networks.
  • communication network 454 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on.
  • communication network 454 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks.
  • Communications links shown in FIG. 4 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
  • computing device 450 can include a processor 502, a display 504, one or more inputs 506, one or more communication systems 508, and/or memory 510.
  • processor 502 can be any suitable hardware processor or combination of processors, such as a central processing unit (“CPU”), a graphics processing unit (“GPU”), and so on.
  • display 504 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on.
  • inputs 506 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 508 can include any suitable hardware, firmware, and/or software for communicating information over communication network 454 and/or any other suitable communication networks.
  • communications systems 508 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 508 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 510 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 502 to present content using display 504, to communicate with server 452 via communications system(s) 508, and so on.
  • Memory 510 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 510 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 510 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 450.
  • processor 502 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 452, transmit information to server 452, and so on.
  • server 452 can include a processor 512, a display 514, one or more inputs 516, one or more communications systems 518, and/or memory 520.
  • processor 512 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • display 514 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on.
  • inputs 516 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 518 can include any suitable hardware, firmware, and/or software for communicating information over communication network 454 and/or any other suitable communication networks.
  • communications systems 518 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 518 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 520 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 512 to present content using display 514, to communicate with one or more computing devices 450, and so on.
  • Memory 520 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 520 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 520 can have encoded thereon a server program for controlling operation of server 452.
  • processor 512 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 450, receive information and/or content from one or more computing devices 450, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
  • image source 402 can include a processor 522, one or more image acquisition systems 524, one or more communications systems 526, and/or memory 528.
  • processor 522 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • the one or more image acquisition systems 524 are generally configured to acquire data, images, or both, and can include an ultrasound transducer, an optical imaging system, one or more radio frequency coils in an MRI system, and so on. Additionally or alternatively, in some embodiments, one or more image acquisition systems 524 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of an ultrasound transducer, an optical imaging system, one or more radio frequency coils in an MRI system, and so on. In some embodiments, one or more portions of the one or more image acquisition systems 524 can be removable and/or replaceable.
  • image source 402 can include any suitable inputs and/or outputs.
  • image source 402 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on.
  • image source 402 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
  • communications systems 526 can include any suitable hardware, firmware, and/or software for communicating information to computing device 450 (and, in some embodiments, over communication network 454 and/or any other suitable communication networks).
  • communications systems 526 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 526 can include hardware, firmware, and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 528 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 522 to control the one or more image acquisition systems 524, and/or receive data from the one or more image acquisition systems 524; to reconstruct images from data; present content (e.g., images, a user interface) using a display; communicate with one or more computing devices 450; and so on.
  • Memory 528 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 528 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 528 can have encoded thereon, or otherwise stored therein, a program for controlling operation of image source 402.
  • processor 522 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images) to one or more computing devices 450, receive information and/or content from one or more computing devices 450, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
  • FIG. 6 illustrates an example of an ultrasound system 600 that can implement the methods described in the present disclosure.
  • the ultrasound system 600 includes a transducer array 602 that includes a plurality of separately driven transducer elements 604.
  • the transducer array 602 can include any suitable ultrasound transducer array, including linear arrays, curved arrays, phased arrays, and so on.
  • the transducer array 602 can include a 1D transducer, a 1.5D transducer, a 1.75D transducer, a 2D transducer, a 3D transducer, and so on.
  • a given transducer element 604 When energized by a transmitter 606, a given transducer element 604 produces a burst of ultrasonic energy.
  • the ultrasonic energy reflected back to the transducer array 602 (e.g., an echo) is converted to an electrical signal (e.g., an echo signal) by each transducer element 604 and can be applied separately to a receiver 608 through a set of switches 610.
  • the transmitter 606, receiver 608, and switches 610 are operated under the control of a controller 612, which may include one or more processors.
  • the controller 612 can include a computer system.
  • the transmitter 606 can be programmed to transmit unfocused or focused ultrasound waves. In some configurations, the transmitter 606 can also be programmed to transmit diverged waves, spherical waves, cylindrical waves, plane waves, or combinations thereof. Furthermore, the transmitter 606 can be programmed to transmit spatially or temporally encoded pulses.
  • the receiver 608 can be programmed to implement a suitable detection sequence for the imaging task at hand.
  • the detection sequence can include one or more of line-by-line scanning, compounding plane wave imaging, synthetic aperture imaging, and compounding diverging beam imaging.
  • the transmitter 606 and the receiver 608 can be programmed to implement a high frame rate. For instance, a frame rate associated with an acquisition pulse repetition frequency ("PRF”) of at least 100 Hz can be implemented. In some configurations, the ultrasound system 600 can sample and store at least one hundred ensembles of echo signals in the temporal direction.
  • the controller 612 can be programmed to implement an imaging sequence as described in the present disclosure, or as otherwise known in the art. In some embodiments, the controller 612 receives user inputs defining various factors used in the implementation of the imaging sequence.
  • a scan can be performed by setting the switches 610 to their transmit position, thereby directing the transmitter 606 to be turned on momentarily to energize transducer elements 604 during a single transmission event according to the imaging sequence.
  • the switches 610 can then be set to their receive position and the subsequent echo signals produced by the transducer elements 604 in response to one or more detected echoes are measured and applied to the receiver 608.
  • the separate echo signals from the transducer elements 604 can be combined in the receiver 608 to produce a single echo signal.
  • the echo signals are communicated to a processing unit 614, which may be implemented by a hardware processor and memory, to process echo signals or images generated from echo signals.
  • the processing unit 614 can generate phase velocity images using the methods described in the present disclosure. Images produced from the echo signals by the processing unit 614 can be displayed on a display system 616.
  • any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein.
  • computer readable media can be transitory or non- transitory.
  • non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., random access memory (“RAM”), flash memory, electrically programmable read only memory (“EPROM”), electrically erasable programmable read only memory (“EEPROM”)), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media.
  • transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.


Abstract

Described here are systems and methods for phase velocity imaging using an imaging system, such as an ultrasound system, an optical imaging system (e.g., an optical coherence tomography system), or a magnetic resonance imaging system. In general, systems and methods for constructing phase velocity images (e.g., 2D images, 3D images) from propagating mechanical wave motion data are described. The systems and methods described in the present disclosure operate in the frequency domain and can be implemented using a single frequency or a band of selected frequencies. If there are multiple mechanical wave sources within the field-of-view, directional filtering may be performed to separate mechanical waves propagating in different directions. The reconstructions described below can be performed for each of these directionally filtered components.

Description

PHASE VELOCITY IMAGING USING AN IMAGING SYSTEM
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application
Serial No. 62/740,730, filed on October 3, 2018, and entitled "PHASE VELOCITY IMAGING USING AN ULTRASOUND IMAGING SYSTEM,” which is herein incorporated by reference in its entirety.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH
[0002] This invention was made with government support under DK092255 awarded by the National Institutes of Health. The government has certain rights in the invention.
BACKGROUND
[0003] Soft tissue pathology can cause changes in mechanical properties. Many methods for measurement of soft tissue mechanical properties have been developed over the past two decades. One class of methods called shear wave elastography ("SWE”) encompasses various different implementations including supersonic shear imaging, comb-push ultrasound shear elastography, among others. These methods use acoustic radiation force to generate shear waves and ultrasound techniques to measure the shear wave motion.
[0004] These methods have been employed for imaging various tissues including liver, breast, thyroid, skeletal muscle, kidney, and prostate. Liver fibrosis staging has been a primary use of SWE techniques that has had good success. SWE has also been widely used for characterizing tumors in breast, thyroid, and prostate. In SWE applications related to cancer imaging in breast, thyroid, and liver, it has been found that inclusions are typically stiffer than the normal surrounding tissue. These methods have been used to characterize lesions as benign and malignant and have found that typically malignant lesions are stiffer than benign lesions. In clinically implemented versions of SWE a few assumptions are made about the medium. These include that the material is elastic, homogeneous, isotropic, and incompressible. In the case of imaging inhomogeneities that are finite-sized, resolution is an important factor to take into account.
[0005] Among the various methods used to implement SWE measurements, there are also numerous techniques for reconstructing the mechanical properties, particularly the shear wave velocity. Most of the methods are based on time-domain data and measuring the time-of-flight of the shear wave in a local sense. Different features of the shear wave motion are used for estimation of the shear wave velocity, including the peaks of the displacement or velocity waveforms. Additionally, cross-correlation of time signals can be used and the size of the window used can affect the shear wave velocity results.
[0006] Another way to analyze the data for estimation of shear wave velocity is using frequency-domain approaches. Typically, frequency-domain assessment of shear wave velocity has been performed with phase gradient or Fourier transform-based methods. These methods have been used to estimate phase velocity dispersion due to material viscoelasticity and geometry. Most of these measurement techniques, as implemented, use a limited spatial extent both in the lateral and axial directions. To fill that void, efforts have been made to extend these methods to create images of the phase velocity or viscoelastic properties such as the shear wave attenuation and material viscosity.
SUMMARY OF THE DISCLOSURE
[0007] The present disclosure addresses the aforementioned drawbacks by providing a method for generating a phase velocity image from mechanical wave motion data acquired with an ultrasound system. Mechanical wave motion data acquired with an ultrasound system are accessed and from which phase velocity data are generated. A phase velocity image is then generated from the phase velocity data.
[0008] It is another aspect of the present disclosure to provide a method for controlling an ultrasound system to acquire mechanical wave motion data. Optimal acquisition parameters for the ultrasound system are determined by (a) acquiring mechanical wave motion data with the ultrasound system over a range of frequency values, (b) reconstructing phase velocity images from the mechanical wave motion data, and (c) analyzing the phase velocity images to determine whether a plateau in phase velocities is present in the phase velocity images. When the plateau in phase velocities is not present in the phase velocity images, the acquisition parameters of the ultrasound system are adjusted and steps (a)-(c) are repeated using different frequency values until the plateau in phase velocities is present in the phase velocity images. The adjusted acquisition parameters are stored as the optimal acquisition parameters. The optimal acquisition parameters are associated with an optimal bandwidth of mechanical wave motion in order to improve accuracy of phase velocity measurements.
[0009] The foregoing and other aspects and advantages of the present disclosure will appear from the following description. In the description, reference is made to the accompanying drawings that form a part hereof, and in which there is shown by way of illustration a preferred embodiment. This embodiment does not necessarily represent the full scope of the invention, however, and reference is therefore made to the claims and herein for interpreting the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] FIG. 1 is a flowchart setting forth the steps of an example method for generating a phase velocity image from wave motion data acquired with an imaging system.
[0011] FIG. 2 is a flowchart setting forth the steps of an example method for generating a phase velocity image from wave motion data acquired with an imaging system and using a local phase velocity imaging ("LPVI”) algorithm.
[0012] FIG. 3 is a flowchart setting forth the steps of an example LPVI method for generating a phase velocity image from wave motion data.
[0013] FIG. 4 is a block diagram of an example system that can implement the methods described in the present disclosure.
[0014] FIG. 5 is a block diagram of example hardware components that can be implemented in the system of FIG. 4.
[0015] FIG. 6 is a block diagram of an example ultrasound imaging system that can be implemented to acquire wave motion data.
DETAILED DESCRIPTION
[0016] Described here are systems and methods for phase velocity imaging using an ultrasound or other suitable imaging system, such as an optical imaging system or a magnetic resonance imaging ("MRI”) system. In general, systems and methods for constructing phase velocity images (e.g., 2D images, 3D images) from propagating wave data are described. The wave data may be guided wave data, shear wave data, or other such mechanical wave data. The systems and methods described in the present disclosure operate in the frequency domain and can be implemented using a single frequency or a band of selected frequencies. If there are multiple acoustic radiation force pushes within the field-of-view or multiple wave sources, directional filtering may be performed to separate waves propagating in different directions. The reconstructions described below can be performed for each of these directionally filtered components.
[0017] The systems and methods described in the present disclosure enable reconstruction of phase velocity images with sufficient resolution capable of accurately depicting inclusions or other objects on the order of less than 5 mm in diameter. Previous time-domain based methods traditionally have difficulty resolving objects of that size.
[0018] In some aspects of the present disclosure, systems and methods for local phase velocity imaging ("LPVI”) are described. LPVI implements local wavenumber imaging for the purposes of creating images of phase velocity in soft tissues. For instance, local estimates of the wavenumber of propagating guided Lamb waves can be utilized. Wavenumber changes as a function of spatial position can be analyzed with these systems and methods. For instance, new wavenumber components created by abrupt wave changes at a structural discontinuity can be identified in the frequency-wavenumber spectra. LPVI is able to accurately reconstruct 2D wave velocity maps (e.g., phase velocity maps) and provide good contrast between inclusions and surrounding background tissues without artifacts.
Phase Velocity Imaging
[0019] Referring now to FIG. 1, a flowchart is illustrated as setting forth the steps of an example method for phase velocity imaging using an imaging system. The wave motion data, $v_m$, are accessed by a computer system at step 102 and then optionally directionally filtered, if desired or needed, into q different components at step 104. For instance, when the wave motion data are acquired using an acquisition scheme in which multiple wave sources are induced in a single acquisition (e.g., by using multiple different push beams or other wave sources), then directional filtering can advantageously reduce wave interferences due to reflection and refraction. Additionally or alternatively, wavenumber filtering may also be applied to the wave motion data, as indicated at optional process block 106.
[0020] Accessing the wave motion data can include retrieving wave motion data from a memory or other data storage device or medium. In other instances, accessing the wave motion data can include acquiring such data with an imaging system and communicating or otherwise transferring the wave motion data to a computer system, which may be a part of the imaging system. The imaging system can be an ultrasound system, an optical imaging system (e.g., an optical coherence tomography system), an MRI system (e.g., by using magnetic resonance elastography techniques), or any other imaging system suitable for imaging propagating mechanical waves (e.g., shear waves, guided waves, other mechanical waves).
[0021] The wave motion data may be mechanical wave motion data, such as shear wave motion data, guided wave motion data, or the like, and may be indicative of particle displacement, velocity, acceleration, or combinations thereof. The wave motion data may be 3D wave motion data (e.g., two spatial dimensions, one temporal dimension) or 4D wave motion data (e.g., three spatial dimensions, one temporal dimension). Although the methods are described below with respect to processing 3D wave motion data (e.g., two spatial dimensions and one temporal dimension), it will be appreciated by those skilled in the art that the methods can be readily extended to taking 4D wave motion data (e.g., three spatial dimensions, one temporal dimension) as an input.
[0022] Different phase velocity estimation algorithms can then be employed at step 108 to compute or otherwise generate phase velocity data, which may include calculating the phase velocity components in the lateral, axial, and elevation directions: $V_x(m,n,f)$, $V_z(m,n,f)$, and $V_y(m,n,f)$. This step may isolate one frequency or a set of selected frequencies (e.g., a frequency band). In general, increasing the selected frequency may improve the quality of the reconstructed phase velocity image, in the sense that the phase velocity can be more uniform and because the shape of the inclusion may be more accurately represented. Moreover, the magnitude of the wave velocity (e.g., shear wave velocity) increases with increasing frequency, which can result in better contrast. The spatial resolution of the reconstructed phase velocity images is determined by the spatial wavelengths, which are shorter at the higher frequencies. This is advantageous for resolving small inclusions with high contrast. In some instances, using a single frequency may not provide adequate signal-to-noise levels. Using a frequency band can preserve the general spatial wavelength, but improve the signal-to-noise.
[0023] In some implementations, a moving patch of p pixels can be used to calculate the phase velocity. The phase velocity estimation algorithm can be implemented with a hardware processor and a memory, and can include a phase gradient ("PG”) algorithm, a two-dimensional Fourier transform algorithm, a two-point continuous wavelet ("2P-CWT”) algorithm, a multiple signal classification ("MUSIC”) algorithm, an LPVI algorithm, or other suitable phase velocity estimation algorithm.
[0024] After the individual phase velocity components are estimated over a typical patch, p, a window can be used for averaging at step 110. The averaging can be Anderssen-Hegland averaging using the methods described by R.S. Anderssen and M. Hegland in "For numerical differentiation, dimensionality can be a blessing!" Mathematics of Computation, 1999; 68(227):1121-1141; by P. Song, et al., in "Fast shear compounding using robust 2-D shear wave speed calculation and multi-directional filtering," Ultrasound Med. Biol., 2014; 40:1343-1355; and in U.S. Patent No. 9,622,711, which are herein incorporated by reference in their entirety.

[0025] Using this approach for a 2D processing window, the final wave velocity, V, at the center pixel, (m, n), can be given by calculating $V_x$ and $V_z$ with the following equations and then using these maps in Eqns. (1) and (2):
$$V_x(m,n) = \frac{\sum_{i=m-h'}^{m+h'} \sum_{j=n-h'}^{n+h'} r(i,j)\, V_x(i,j)}{\sum_{i=m-h'}^{m+h'} \sum_{j=n-h'}^{n+h'} r(i,j)}, \qquad V_z(m,n) = \frac{\sum_{i=m-h'}^{m+h'} \sum_{j=n-h'}^{n+h'} r(i,j)\, V_z(i,j)}{\sum_{i=m-h'}^{m+h'} \sum_{j=n-h'}^{n+h'} r(i,j)};$$
[0026] where h' and r(i,j) are defined by:

$$h' = \frac{W - 1}{2}, \qquad r(i,j) = CC_x(i,j)\cdot CC_z(i,j);$$
[0027] and m is the pixel lateral dimension, n is the pixel axial dimension, and $CC_x$ and $CC_z$ are the normalized cross-correlation coefficients along the x- and z-directions.
[0028] The phase velocity components ($V_{x,AH,DF,q}(m,n,f)$, $V_{z,AH,DF,q}(m,n,f)$, $V_{y,AH,DF,q}(m,n,f)$) can be combined either in two dimensions or three dimensions, as indicated at step 112. As one example, the phase velocity components can be combined with the following equations:

$$V_{AH,DF,q}(m,n,f) = \frac{V_{x,AH,DF,q}\, V_{z,AH,DF,q}}{\sqrt{V_{x,AH,DF,q}^{2} + V_{z,AH,DF,q}^{2}}};$$

$$V_{AH,DF,q}(m,n,f) = \left(\frac{1}{V_{x,AH,DF,q}^{2}} + \frac{1}{V_{y,AH,DF,q}^{2}} + \frac{1}{V_{z,AH,DF,q}^{2}}\right)^{-1/2};$$
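As an illustrative numerical check (not part of the original disclosure), the combination of directional components can be sketched in Python, assuming the standard relation for the apparent speeds of a plane wave, $1/V^2 = 1/V_x^2 + 1/V_z^2$ (plus a $1/V_y^2$ term in 3D); the patent's exact combination equations appear only as images in the source:

```python
import numpy as np

def combine_2d(Vx, Vz):
    # Assumed 2D combination: 1/V^2 = 1/Vx^2 + 1/Vz^2
    return (Vx * Vz) / np.sqrt(Vx**2 + Vz**2)

def combine_3d(Vx, Vy, Vz):
    # Assumed 3D counterpart: 1/V^2 = 1/Vx^2 + 1/Vy^2 + 1/Vz^2
    return 1.0 / np.sqrt(1.0 / Vx**2 + 1.0 / Vy**2 + 1.0 / Vz**2)

# A plane wave with true speed c propagating at angle theta in the x-z
# plane has apparent speeds Vx = c/cos(theta) and Vz = c/sin(theta);
# the combination recovers the true speed c.
c_true, theta = 2.0, np.deg2rad(30.0)
V = combine_2d(c_true / np.cos(theta), c_true / np.sin(theta))
```

For a wave traveling purely laterally, $V_z \to \infty$ and the 2D combination reduces to $V_x$, consistent with the text's remark that the combination yields the true wave velocity rather than only the laterally traveling component.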
[0029] If the wave motion data were directionally filtered, then the directionally filtered components can be combined at step 114 to generate a phase velocity image. As one example, the directionally filtered components can be combined using a weighted sum with weights $b_q(m,n)$,

$$V_{AH,DF}(m,n,f) = \sum_{q} b_q(m,n)\, V_{AH,DF,q}(m,n,f) \quad (5);$$
[0030] The methods described above provide phase velocity images that can be used in various ways and applications. One advantage of phase velocity images is that they can be reconstructed at a known frequency or using a known band of frequencies, which can provide standardization.
[0031] In the case of viscoelastic materials, the phase velocity changes with frequency, a phenomenon known as dispersion. The phase velocity dispersion can be fit with rheological models such as the Kelvin-Voigt, Maxwell, Generalized Maxwell, Zener, or fractional models to estimate viscoelastic parameters and storage and loss moduli. As a result, material property maps (e.g., viscoelastic parameter maps, mechanical parameter maps) can optionally be generated, as indicated at step 314. These parameters provide an additional way to characterize soft tissues and other materials.
[0032] The phase velocity image reconstructions described in the present disclosure can be optimized using different metrics, such as the coefficient of variation (i.e., the ratio of mean to standard deviation in a region); standard deviation or variance in a region; contrast between two different regions; contrast-to-noise ratio ("CNR”) between two different regions; and resolution. The processing parameters that can be altered in an iterative or guided fashion include the patch size, p ; window size, W ; the frequency or frequency band; or combinations of these parameters. The optimization process can be performed in an iterative or guided process to maximize metrics (e.g., contrast, CNR) or minimize metrics (e.g., coefficient of variation, standard deviation, variance, resolution). To initialize the optimization process, the size of a region or inclusion can be used. Additionally, a group velocity image and a survey of the center frequency of the shear wave motion can guide the frequency choice and size processing parameters.
[0033] The adaptation of various one-dimensional methods for measurement of phase velocity to two dimensions and/or three dimensions, combined with Anderssen-Hegland averaging, provides a measurement of the true wave velocity in 2D or 3D, not just the wave velocity of the laterally traveling wave. Additionally, the Anderssen-Hegland averaging can provide a level of robustness for the reconstructions.
[0034] In some implementations, the methods described in the present disclosure can be implemented on different sets of wave motion data acquired when generating shear waves in different portions of the same volume, and then combining the resulting phase velocity images. For example, a first phase velocity map can be generated from wave motion data acquired when a push beam excitation was generated on one side (e.g., a left side) of a volume (e.g., using the left side of an ultrasound transducer to generate the push beam excitation). A second phase velocity map can then be generated from wave motion data acquired when a push beam excitation was generated on another side (e.g., a right side) of the volume (e.g., using the right side of the ultrasound transducer to generate the push beam excitation). These first and second phase velocity images can then be combined, such as by averaging the two images.
Phase Gradient Phase Velocity Estimation
[0035] As noted above, in some implementations the phase velocity estimation algorithm can be a phase gradient ("PG") algorithm. In such a method, a phase gradient calculation is used to reconstruct the phase velocity in a localized region defined as a patch, p, in units of pixels or spatial locations. To find the phase values for the phase gradient calculation, a one-dimensional Fourier transform is performed in the time dimension for each recording of the wave motion at each location. The phase values at each location and for each frequency or set of frequencies are extracted for later use in calculating the phase gradient. For each frequency of interest, the phase gradient ($\phi_2 - \phi_1$, or $\Delta\phi$) is calculated either using the end points ($x_2 - x_1$, or $\Delta x$) of the patch or by performing a linear regression (e.g., linear curve fit) of all the data points within the patch to provide local estimates of phase velocity,

$$c(f) = \frac{2\pi f\, \Delta x}{\Delta \phi}.$$
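The phase gradient estimate can be sketched numerically (an illustration with hypothetical sampling parameters, not the patent's implementation): a temporal Fourier transform extracts the phase at each lateral position, and a linear regression of phase versus position yields the phase gradient and thus the local phase velocity:

```python
import numpy as np

# Hypothetical plane wave v(x, t) = cos(2*pi*f0*t - k0*x) sampled on a
# small lateral patch of p pixels (all parameters are illustrative).
f0, c_true = 200.0, 2.0                 # Hz, m/s
k0 = 2 * np.pi * f0 / c_true            # rad/m
dt, nt = 1e-4, 500                      # chosen so f0 falls on an FFT bin
dx, p = 0.5e-3, 9                       # 0.5 mm pixels, 9-pixel patch
t = np.arange(nt) * dt
x = np.arange(p) * dx
v = np.cos(2 * np.pi * f0 * t[None, :] - k0 * x[:, None])

# 1D Fourier transform in time at each location; extract the phase at f0.
V = np.fft.rfft(v, axis=1)
freqs = np.fft.rfftfreq(nt, dt)
phase = np.unwrap(np.angle(V[:, np.argmin(np.abs(freqs - f0))]))

# Linear regression of phase vs. position: slope = -k0, so the local
# phase velocity is c = 2*pi*f0 / |slope|.
slope = np.polyfit(x, phase, 1)[0]
c_est = 2 * np.pi * f0 / abs(slope)
```

The regression over all patch points is generally more robust to noise than differencing only the two end points, which is why both options are described above.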
Two-dimensional Fourier Transform (2D FT)
[0036] As noted above, in some implementations the phase velocity estimation algorithm can be a two-dimensional Fourier transform ("2DFT") algorithm. In such a method, a 2D Fourier transform of spatiotemporal data, $v_m(x,t)$, is computed over a certain range encompassing p pixels. The length in the spatial direction is $X_T = p\,x_p$, where $x_p$ is the pixel width. Using a 2D Fourier transform provides a representation in a spatial frequency and temporal frequency space, $V(k,f)$. For each frequency, the peaks are identified in the magnitude spectrum $|V_m(k,f)|$. The locations of the peaks are used to calculate the phase velocity using,

$$c(f) = \frac{f}{k_{peak}(f)}.$$
[0037] The locations for calculating the phase velocity can also be detected for each wavenumber, k, in a similar way as done for each frequency. These methods are further described by M. Bernal, et al., in "Material property estimation for tubes and arteries using ultrasound radiation force and analysis of propagating modes," J. Acoust. Soc. Am., 2011; 129:1344-1354, which is herein incorporated by reference in its entirety.
[0038] When using a short segment of only p pixels, zero padding may need to be performed. Additionally, identification of wave propagation modes can be performed using a gradient-based methodology and fitting a polynomial curve to the data points to provide a more robust estimation of the dispersion curve, as described by P. Kijanka, et al., in "Robust phase velocity dispersion estimation of viscoelastic materials used for medical applications based on the Multiple Signal Classification Method," IEEE Trans Ultrason Ferroelectr Freq Control, 2018; 65:423-439, which is herein incorporated by reference in its entirety.
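A minimal 2DFT sketch on synthetic plane-wave data (hypothetical parameters, with the grid chosen so the spectral peak falls exactly on a bin; the zero-padding and mode-tracking refinements described above are omitted):

```python
import numpy as np

# Hypothetical plane wave over one lateral line of tracked wave motion.
c_true, f0 = 2.0, 200.0
dt, nt = 1e-4, 500
dx, nx = 0.5e-3, 100
t = np.arange(nt) * dt
x = np.arange(nx) * dx
k0 = 2 * np.pi * f0 / c_true
v = np.cos(2 * np.pi * f0 * t[None, :] - k0 * x[:, None])

# 2D Fourier transform to the wavenumber-frequency domain.
V = np.fft.fftshift(np.fft.fft2(v))
k_ax = np.fft.fftshift(np.fft.fftfreq(nx, dx)) * 2 * np.pi  # rad/m
f_ax = np.fft.fftshift(np.fft.fftfreq(nt, dt))              # Hz

# At the frequency of interest, locate the wavenumber peak in the
# magnitude spectrum and convert it to phase velocity.
col = np.argmin(np.abs(f_ax - f0))
row = np.argmax(np.abs(V[:, col]))
c_est = 2 * np.pi * f0 / abs(k_ax[row])
```

Repeating the peak search over a range of frequency columns traces out the dispersion curve c(f) referenced throughout this disclosure.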
Two-point Continuous Wavelet Method (2P-CWT)
[0039] As noted above, in some implementations the phase velocity estimation algorithm can be a two-point continuous wavelet ("2P-CWT") algorithm. In such methods, a wavelet such as the Morlet wavelet, g(t), is composed of a harmonic wave modulated by a Gaussian envelope, where $T_0$ is the central period:

$$g(t) = e^{-t^{2}/2}\, e^{i 2\pi t / T_0}.$$
[0040] The mother wavelet, g(t), can be dilated by $\sigma$ and delayed by $\tau$ as a daughter wavelet,

$$g_{\sigma,\tau}(t) = \frac{1}{\sqrt{\sigma}}\, g\!\left(\frac{t - \tau}{\sigma}\right).$$
[0041] The wavelet transform is defined as,

$$W(\sigma,\tau) = \int_{-\infty}^{\infty} v(t)\, g_{\sigma,\tau}^{*}(t)\, dt;$$

[0042] where the $*$ denotes a complex conjugation. For a given separation of p spatial locations, the phase velocity can be estimated at a set of specified frequencies. For the two signals at the different spatial locations separated by p pixels with dimensions $x_p$, the continuous wavelet transform is calculated and a correlation is calculated to find the phase shift, $\Delta\phi$, at a particular frequency from the unwrapped phase,

$$c(f) = \frac{2\pi f\, p\, x_p}{\Delta \phi}.$$
[0043] Further details of such methods are described by Q. Wu, et al., in "Measurement of interstation phase velocity by wavelet transformation,” Earthquake Science, 2009; 22:425-429.
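The two-point wavelet estimate can be sketched as follows (a simplified illustration using a hypothetical Gaussian-windowed tone burst; the envelope width sigma_t and all signal parameters are arbitrary choices, and only a single scale tuned to f0 is evaluated):

```python
import numpy as np

# Two hypothetical tone bursts recorded at locations separated by p pixels
# of width xp; the second arrives with delay d = p*xp / c_true.
c_true, f0 = 2.0, 200.0
p, xp = 8, 0.5e-3
delay = p * xp / c_true
dt = 1e-5
t = np.arange(0, 0.05, dt)

def sig(tau):
    # Gaussian-windowed tone burst delayed by tau (illustrative signal).
    env = np.exp(-((t - 0.02 - tau) ** 2) / (2 * 0.003 ** 2))
    return env * np.cos(2 * np.pi * f0 * (t - tau))

s1, s2 = sig(0.0), sig(delay)

# Morlet-style daughter wavelet tuned to f0 (envelope width is arbitrary).
sigma_t = 2.0 / f0
g = np.exp(-0.5 * (t - 0.025) ** 2 / sigma_t ** 2) \
    * np.exp(1j * 2 * np.pi * f0 * (t - 0.025))

# Wavelet coefficients at this single scale; the phase of W1*conj(W2) is
# the phase shift accumulated over the separation p*xp.
W1 = np.sum(s1 * np.conj(g)) * dt
W2 = np.sum(s2 * np.conj(g)) * dt
dphi = np.angle(W1 * np.conj(W2))
c_est = 2 * np.pi * f0 * p * xp / dphi
```

For larger separations or higher frequencies the phase shift can exceed pi, in which case the phase must be unwrapped across frequencies as the text notes.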
Multiple Signal Classification (MUSIC)
[0044] As noted above, in some implementations the phase velocity estimation algorithm can be a multiple signal classification ("MUSIC") algorithm. In such a method, phase velocities can be estimated for a segment of p pixels with dimensions $x_p$. The phase velocities can be identified using a gradient-based methodology and fitting a polynomial curve to the data points to provide a more robust estimation of the dispersion curve. An example MUSIC algorithm is described in Patent Application No. PCT/US2018/036050, which is herein incorporated by reference in its entirety, and by P. Kijanka, et al., in "Robust phase velocity dispersion estimation of viscoelastic materials used for medical applications based on the Multiple Signal Classification Method," IEEE Trans Ultrason Ferroelectr Freq Control, 2018; 65:423-439.
Local Phase Velocity Imaging (LPVI)
[0045] As noted above, in some implementations the phase velocity estimation algorithm can be a local phase velocity imaging ("LPVI") algorithm. In such instances, the method described with respect to FIG. 1 can be modified as shown in FIG. 2, which illustrates a flowchart setting forth the steps of an example method for phase velocity imaging using an ultrasound system and an LPVI algorithm. The LPVI method can be used with or without wavenumber filtering to obtain the phase velocity images. Anderssen-Hegland averaging could be used as an additional step in the LPVI step.

[0046] In general, wave motion data, $v_m$, are accessed by a computer system at step 202 and are then optionally directionally filtered, if desired or needed, into q different components at step 204. Additionally or alternatively, wavenumber filtering may also be applied to the wave motion data, as indicated at optional process block 206. Accessing the wave motion data can include retrieving wave motion data from a memory or other data storage device or medium. In other instances, accessing the wave motion data can include acquiring such data with an imaging system and communicating or otherwise transferring the wave motion data to a computer system, which may be a part of the imaging system. The imaging system can be an ultrasound system, an optical imaging system, an MRI system, or any other imaging system suitable for imaging propagating mechanical waves (e.g., shear waves, guided waves, other mechanical waves).
[0047] The wave motion data may be mechanical wave motion data, such as shear wave motion data, guided wave motion data, or the like, and may be indicative of particle displacement, velocity, acceleration, or combinations thereof. The wave motion data may be 3D wave motion data (e.g., two spatial dimensions, one temporal dimension) or 4D wave motion data (e.g., three spatial dimensions, one temporal dimension). Although the methods are described below with respect to processing 3D wave motion data, it will be appreciated by those skilled in the art that the methods can be readily extended to taking 4D wave motion data as an input.
[0048] An LPVI algorithm is then employed at step 208 to compute or otherwise generate phase velocity data, such as by calculating the phase velocity for the different directionally filtered components. For instance, the directionally filtered components of the wave motion data can be input to an LPVI algorithm, generating output as the phase velocity data. This step may isolate one frequency or a set of selected frequencies (e.g., a frequency band). The directionally filtered components can then be combined at step 210 to generate a phase velocity image. For instance, the directionally filtered components can be combined according to Eqn. (5).
[0049] As noted above, the LPVI algorithm operates in a "k-space" (frequency-wavenumber) domain, where additional information about wave modes and wavenumber distribution can be obtained in comparison to the time-space domain. Generally, the transformation of the spatiotemporal particle motion v(z, x, t) to the k-space domain is accomplished using the three-dimensional (3D) Fourier transform (F3D),

$$V(k_z, k_x, f) = \iiint v(z, x, t)\, e^{-i\left(k_z z + k_x x + 2\pi f t\right)}\, dz\, dx\, dt;$$

[0050] where $V(k_z, k_x, f)$ is the resulting 3D k-space representation in terms of the frequency and wavenumber vectors, $k_z$ and $k_x$.

[0051] The frequency, f, is the counterpart of the time domain, whereas the wavenumbers $k_z$ and $k_x$ are the counterparts of the spatial dimensions z and x, respectively. With the obtained spectrum data $V(k_z, k_x, f)$, a wavenumber spectrum can be acquired for a particular frequency $f_0$.
[0052] Referring now to FIG. 3, a flowchart is illustrated as setting forth the steps of an example method for generating phase velocity data using an LPVI algorithm. In LPVI, the transformation of the spatiotemporal particle motion, v(z, x, t), can be performed in progressive steps. The LPVI approach reconstructs local wave velocity (e.g., shear wave velocity, guided wave velocity) for a single frequency or selected frequency band.
[0053] The method begins with transforming the wave motion data to the frequency domain, as indicated at step 302. In some instances, transforming the wave motion data (e.g., the spatiotemporal particle motion data, v(z, x, t)) to the frequency domain can be implemented using a one-dimensional Fourier transform (F1D) as follows,

$$V(z, x, f) = \int v(z, x, t)\, e^{-i 2\pi f t}\, dt \quad (13);$$

[0054] where V(z, x, f) is the resulting particle velocity motion in the frequency domain. At this step, the spatial domains of V(z, x, f) remain unchanged.
[0055] In other instances, the wave motion data are first transformed into the frequency-wavenumber domain (e.g., by using a 3D Fourier transform) where the resulting spectrum data are directionally filtered, band-pass filtered, or both.
[0056] As one example of directional filtering, directional filters can be applied in a 2D wavenumber domain to extract directional components, such as left-to-right ("LR”) and right-to-left ("RL”) propagating wave components, and to remove wave interferences. Wave components propagating in the axial direction can also be reduced by the directional filter using a wedge filter with a smooth roll-off in the kz direction.
[0057] As one example of band-pass filtering, a first order Butterworth band-pass filter, or other suitable band-pass filter, can be applied to each frame to remove spatial wavelengths representing wave velocities outside of a specified range. For instance, the band-pass filter can be designed to remove spatial wavelengths representing wave velocities outside a predetermined range of 0.5-1.0 m/s. Alternatively, the limits of the band-pass filter can be selected to follow a phase velocity curve, such as a phase velocity curve that is determined with a Fourier transform (e.g., a 2DFT). For instance, the limits can be selected by applying a range of $\pm c_L$ to the dispersion curve, $c_B(f)$, such that the limits are $c_B(f) - c_L$ to $c_B(f) + c_L$.
[0058] This filtering process can be implemented as the product between the spectrum in the frequency-wavenumber domain, $V(k_z, k_x, f)$, of the particle motion and a filter function in the form,

$$V_W(k_z, k_x, f) = V(k_z, k_x, f)\cdot W_F(k_z, k_x) \quad (14);$$

[0059] where $W_F(k_z, k_x)$ denotes the 2D filter in the $k_z$-$k_x$ domain and $V_W(k_z, k_x, f)$ is the filtered spectrum. In these implementations, the frequency domain data are then generated by converting the filtered spectrum back into the frequency-space domain using an inverse 2D Fourier transform, resulting in $V_W(z, x, f)$.
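Directional filtering by masking in a transform domain can be sketched in two dimensions, (x, t), for illustration (hypothetical wave parameters; the patent describes the analogous operation on $(k_z, k_x)$ spectra per frequency, with wedge filters rather than the hard quadrant mask used here):

```python
import numpy as np

# Hypothetical mixture of a left-to-right (LR) wave and a reflected
# right-to-left (RL) wave of half the amplitude.
c, f0 = 2.0, 200.0
dt, nt, dx, nx = 1e-4, 500, 0.5e-3, 100
t = np.arange(nt) * dt
x = np.arange(nx) * dx
k0 = 2 * np.pi * f0 / c
lr = np.cos(2 * np.pi * f0 * t[None, :] - k0 * x[:, None])
rl = 0.5 * np.cos(2 * np.pi * f0 * t[None, :] + k0 * x[:, None])
v = lr + rl

# Directional filtering: in the (k, f) plane, waves moving toward +x
# occupy the quadrants where k and f have opposite signs.
V = np.fft.fft2(v)
k = np.fft.fftfreq(nx, dx)[:, None]
f = np.fft.fftfreq(nt, dt)[None, :]
v_lr = np.real(np.fft.ifft2(V * ((k * f) < 0)))  # LR component only
```

In practice a smooth roll-off (as with the wedge filter mentioned above) is preferred over a binary mask to limit ringing artifacts.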
[0060] Spectrum data for a particular frequency or frequency band are then selected for processing, as indicated at step 304. For instance, with the acquired spectrum data $V(z, x, f)$ or $V_W(z, x, f)$, wavefield data can be obtained for a particular frequency, $f_0$, or a set of particular frequencies.
[0061] A short space two-dimensional ("2D") Fourier transform (F2D) in the spatial domain is then performed on the selected spectrum data, $V(z, x, f_0)$ or $V_W(z, x, f_0)$, in order to generate windowed wavefield data, as indicated at step 306. When working with a set of particular frequencies, the short space 2DFT can be applied on the set of particular frequencies to speed up the calculations using parallel computation. In these instances, the short space 2DFT does not need to be repeated, individually, for each particular frequency in order to reconstruct multiple phase velocity images, thereby significantly reducing computational time.
[0062] In some implementations, a short space 2DFT can be repeated for each pixel in the x and z directions. In some other implementations, the short space 2DFT can also be repeated in a sparse manner for every n-th or every m-th pixel in the x and z directions, respectively. When the short space 2DFT is implemented in such a sparse manner, missing pixels can be reconstructed using a linear interpolation, or other suitable interpolation, step. As an example, a linear interpolation step can be done by applying a 2D convolution method.
[0063] In general, performing a short space 2DFT on the selected spectrum data includes breaking down the $V(z, x, f_0)$ wavefield into small segments over the spatial dimensions before Fourier transformation and retaining the spatial information. In one example implementation, the $V(z, x, f_0)$ wavefield can be multiplied by a window function, $W(z, x)$, that is non-zero for only a small region in space while constant over the entire frequency domain. As one example, this windowing process can be described as,

$$\bar{V}(z, x, f_0) = V(z, x, f_0)\cdot W(z, x) \quad (15);$$
[0064] As one example, the window function can be a 2D cosine-tapered window (i.e., Tukey window), which can be defined as,

$$W(z, x) = w(z)\cdot w(x) \quad (16);$$
[0065] where w defines a 1D window applied along the generic coordinate, z or x. The 1D cosine-tapered window is used for two spatial directions in a form,

$$w(z) = \begin{cases} 1, & \left|z - z_0\right| \le \dfrac{\alpha h}{2} \\[6pt] \dfrac{1}{2}\left\{1 + \cos\!\left[\dfrac{2\pi}{(1-\alpha)h}\left(\left|z - z_0\right| - \dfrac{\alpha h}{2}\right)\right]\right\}, & \dfrac{\alpha h}{2} < \left|z - z_0\right| \le \dfrac{h}{2} \\[6pt] 0, & \text{otherwise} \end{cases} \quad (17);$$
[0066] where $z_0$ and h denote the center and width of the window, respectively. The parameter z is the retained spatial vector of the windowed part. The parameter $\alpha$ defines the shape of the window. Particularly, for $\alpha = 1$ the window becomes a rectangular window, while $\alpha = 0$ leads to a Hanning window. The Tukey window is convenient because it provides flexibility that helps avoid losing information near the window edges. At the same time this reduces truncation distortions and leakage errors associated with a sharp rectangular window. As one example, a taper width of $\alpha = 0.25$ across the spatial domains can be used; however, other taper widths can also be used.
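For illustration, the 2D Tukey window can be built from SciPy's 1D window (note: `scipy.signal.windows.tukey` uses the opposite convention, where its `alpha` is the tapered fraction, so the document's shape parameter maps as alpha_doc = 1 - alpha_scipy; this mapping is our interpretation of the convention stated above):

```python
import numpy as np
from scipy.signal.windows import tukey

# 2D cosine-tapered window as a product of 1D windows, per Eqn. (16).
# scipy's alpha is the tapered fraction, so alpha_doc = 1 - alpha_scipy
# under the document's convention (alpha_doc = 1 -> rectangular).
alpha_doc = 0.25
w = tukey(64, alpha=1 - alpha_doc)
W2d = np.outer(w, w)          # W(z, x) = w(z) * w(x)

# Limiting cases described in the text:
rect = tukey(64, alpha=0.0)   # alpha_doc = 1 -> rectangular window
hann = tukey(64, alpha=1.0)   # alpha_doc = 0 -> Hann(ing) window
```

The flat central plateau of the product window preserves the local wavefield amplitude while the cosine skirts suppress the truncation leakage discussed above.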
[0067] In general, increasing the spatial window size may cause a decrease in the mean phase velocity. This behavior is caused by smoothing at the edges of the inclusion. The phase velocity is smoothed to a greater extent when larger spatial windows are used. A smaller spatial window size thus gives better spatial resolution of wave velocity estimates. However, choosing a spatial window that is too small produces a noisier, blurred image. Thus, the spatial window size can be selected depending on the size of the inclusion that is evaluated. One approach is to gradually increase the spatial window size for image reconstruction. This approach could be used to choose an optimal window size, which should be smaller than the investigated inclusion in order to avoid excessive flattening and blurring of the edges.
[0068] When the spatial window slides along the spatial dimensions, windowed wavefield regions, $\bar{V}(z, x, f_0)$ or $\bar{V}_W(z, x, f_0)$, are generated. The windowed wavefield data are then Fourier transformed along the spatial dimensions, as indicated at step 308. As noted above, in some implementations, a 2D Fourier transform is applied to the windowed wavefield data, resulting in a set of 2D wavenumber $(k_z, k_x)$ spectra for the particular frequency, $f_0$, which can be mathematically given as,

$$V(k_z, k_x, f_0) = \iint \bar{V}(z, x, f_0)\, e^{-i\left(k_z z + k_x x\right)}\, dz\, dx \quad (18);$$

[0069] The resulting spectra $V(k_z, k_x, f_0)$ are indexed by the locations of the window such that the spatial information is retained.
[0070] In other implementations, 1D Fourier transforms can be separately applied along the spatial dimensions, z and x, which results in a set of 1D wavenumber spectra for the particular frequency, $f_0$. This can be represented as,

$$V_z(k_z, 1\!:\!x_N, f_0) = \int \bar{V}(z, 1\!:\!x_N, f_0)\, e^{-i k_z z}\, dz \quad (19);$$

[0071] for the z-direction and as,

$$V_x(1\!:\!z_N, k_x, f_0) = \int \bar{V}(1\!:\!z_N, x, f_0)\, e^{-i k_x x}\, dx \quad (20);$$

[0072] for the x-direction. As in other instances in the present disclosure, V can also be $V_W$ when directional and/or band-pass filtering are applied to the wave motion data. In Eqns. (19) and (20), N is the number of scanning lines in the z-direction and x-direction. The resulting spectra can be stored in vectors for each direction. Using separate 1D Fourier transforms to generate the wavenumber spectra instead of a 2D Fourier transform has the advantage of reducing computation time.

[0073] The spatial distribution of the phase velocity of the wave motion for the frequency $f_0$ is then calculated, as indicated at step 310. In those instances where the wavenumber spectra data are generated using a 2D Fourier transform, from the
$V(k_z, k_x, f_0)$ spectra, for all spatial parts, the phase velocity can be calculated as,

$$c(z, x, f_0) = \frac{2\pi f_0}{k} \quad (21);$$

[0074] where k is a wavenumber magnitude described as,

$$k = \sqrt{\hat{k}_z^{2} + \hat{k}_x^{2}} \quad (22);$$

[0075] and $\hat{k}_z$ and $\hat{k}_x$ are arguments found using,

$$\left(\hat{k}_z, \hat{k}_x\right) = \underset{(k_z, k_x)}{\operatorname{argmax}}\, \left|V(k_z, k_x, f_0)\right| \quad (23).$$
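The windowed-spectrum procedure described above (short-space 2D FT, wavenumber peak picking, and conversion to a local phase velocity) can be sketched end-to-end on a synthetic single-frequency plane wave (all parameters are hypothetical, and zero padding stands in for any wavenumber-grid refinement):

```python
import numpy as np

# Synthetic single-frequency complex wavefield V(z, x, f0) for a plane
# wave with speed c_true propagating along x (illustrative parameters).
c_true, f0 = 2.0, 400.0
dz = dx = 0.25e-3
nz = nx = 48
x = np.arange(nx) * dx
k0 = 2 * np.pi * f0 / c_true
Vf = np.exp(-1j * k0 * x)[None, :] * np.ones((nz, 1))

# Short-space 2D FT: slide a window over the field, find the wavenumber
# peak of each local spectrum, and convert it to a local phase velocity
# c = 2*pi*f0 / sqrt(kz^2 + kx^2).
win, pad = 32, 200            # window size in pixels; zero-padded FFT size
kz_ax = np.fft.fftfreq(pad, dz) * 2 * np.pi
kx_ax = np.fft.fftfreq(pad, dx) * 2 * np.pi
c_map = np.zeros((nz - win, nx - win))
for i in range(nz - win):
    for j in range(nx - win):
        S = np.abs(np.fft.fft2(Vf[i:i + win, j:j + win], s=(pad, pad)))
        iz, ix = np.unravel_index(np.argmax(S), S.shape)
        c_map[i, j] = 2 * np.pi * f0 / np.hypot(kz_ax[iz], kx_ax[ix])
```

In a heterogeneous medium the same loop produces a spatially varying map, since each window's spectrum reflects only the local wavenumber content; a rectangular implicit window is used here, whereas the Tukey taper above would reduce leakage near inclusion boundaries.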
[0076] Alternatively, in those instances where the wavenumber spectra data are generated using separate 1D Fourier transforms, the spatial distribution of the phase velocity of the wave motion for the frequency, $f_0$, can be calculated from the wavenumber spectra, $V_z(k_z, 1\!:\!x_N, f_0)$ and $V_x(1\!:\!z_N, k_x, f_0)$, for all spatial locations as,

$$c(z, x, f_0) = \frac{2\pi f_0}{k} \quad (24);$$

[0077] where here the wavenumber magnitude is computed as,

$$k = \sqrt{\hat{k}_z^{2} + \hat{k}_x^{2}} \quad (25);$$

[0078] and where,

$$\hat{k}_z = \underset{k_z}{\operatorname{argmax}}\, \left|V_z(k_z, 1\!:\!x_N, f_0)\right| \quad (26);$$

$$\hat{k}_x = \underset{k_x}{\operatorname{argmax}}\, \left|V_x(1\!:\!z_N, k_x, f_0)\right| \quad (27).$$
[0079] Based on Eqn. (21) or (24), the full field-of-view ("FOV") phase wave velocity image can be reconstructed, as indicated at 312. As noted above, the frequency $f_0$ can be a single frequency value or a frequency band, $f_{band}$, centered about $f_0$: $f_0 - f_b, \ldots, f_0 + f_b$. When a frequency band is used, the phase velocity can be defined through an equation,

$$c(z, x) = \frac{1}{N}\sum_{i=1}^{N} c(z, x, f_i) \quad (28);$$

[0080] where $f_i$ ($i = 1, 2, \ldots, N$) are the frequencies in a band centered about $f_0$. Hence, Eqn. (28) can represent a phase wave velocity image at each location over the selected frequency band.
[0081] These examples of the methods have been described with respect to inputting 3D wave motion data (e.g., two spatial dimensions, one temporal dimension). As noted above, the method can also be extended to use with 4D wave motion data (e.g., three spatial dimensions, one temporal dimension). In these instances, the processing described above can be extended to include data along the y-axis ($k_y$-axis).
[0082] As a general matter, the phase velocity in a viscoelastic material (e.g., an inclusion) will vary with frequency and often may plateau at higher frequencies. For instance, the phase velocity for inclusions may plateau above 850 Hz. Thus, for phase velocity imaging of inclusions or other viscoelastic materials, a wide bandwidth of frequency may be advantageous. In most cases, a single acoustic radiation force push is used, which can provide waves of different frequencies depending on the medium. However, to obtain accurate measurements of wave velocity or other mechanical properties, it may be necessary to excite waves at higher frequencies.
[0083] The phase velocity imaging techniques described in the present disclosure could be used to reconstruct images at multiple frequencies to determine whether a plateau in the wave velocities has been reached over the frequency range defined. If this has not been reached, different techniques can be utilized to increase the frequency content of the propagating waves (e.g., shear waves, guided waves, other mechanical waves). The acoustic radiation force push beam can be altered in its spatial distribution (e.g., by changing number of elements used and/or focal location) or temporal application (e.g., by using repeated pushes at a certain frequency). Feedback between the image reconstruction and wave motion acquisition can be performed to optimize the bandwidth of the resulting wave motion in order to improve the accuracy of the measurements. In this manner, the phase velocity imaging techniques described in the present disclosure can be used to evaluate the quality of the data and to assess the confidence of an estimation of mechanical properties from a given data acquisition. Revised acquisition settings could be generated in response to this analysis and used for subsequent acquisitions within a given imaging session.
[0084] Based on the local wave velocity images over a given frequency range, mechanical property maps can be generated. As an example, elasticity and viscosity maps can be generated from local shear wave velocity images. For instance, the Kelvin-Voigt ("KV") rheological viscoelastic model can be used to generate mechanical property maps. The KV model is composed of a dashpot and a spring placed in parallel. The stress-strain relationship of the KV model is represented in the form,

$$\sigma = \mu_1 \varepsilon + \mu_2 \frac{\partial \varepsilon}{\partial t} \quad (29);$$

[0085] where the stress, $\sigma$, is related to the strain, $\varepsilon$, by the shear elasticity $\mu_1$, the shear viscosity $\mu_2$, and the time derivative $(\partial/\partial t)$. Inserting this relation into the equation of wave motion, a one-dimensional Helmholtz equation for the KV model can be obtained. Then, from the complex wave vector, the shear wave velocity of the KV model can be computed as,

$$c_s(\omega) = \sqrt{\frac{2\left(\mu_1^{2} + \omega^{2}\mu_2^{2}\right)}{\rho\left(\mu_1 + \sqrt{\mu_1^{2} + \omega^{2}\mu_2^{2}}\right)}} \quad (30);$$

[0086] where $\rho$ represents the density and $\omega$ is the angular frequency ($\omega = 2\pi f$).
[0087] Thus, as an example, the elasticity and viscosity parameters can be locally estimated for each pixel by fitting the curve of Eqn. (30) over the frequency domain, such as by solving a nonlinear least-squares problem. For instance, the following nonlinear least-squares problem can be solved:

$$\min_{\mu_1, \mu_2} \sum_{i=1}^{N}\left[c(f_i) - c_s\!\left(2\pi f_i; \mu_1, \mu_2\right)\right]^{2} \quad (31).$$
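The per-pixel fit can be sketched with SciPy on a synthetic Kelvin-Voigt dispersion curve (illustrative parameter values; `curve_fit` is one of several nonlinear least-squares options):

```python
import numpy as np
from scipy.optimize import curve_fit

def kv_velocity(f, mu1, mu2, rho=1000.0):
    # Kelvin-Voigt shear wave phase velocity as a function of frequency,
    # per Eqn. (30): mu1 = shear elasticity, mu2 = shear viscosity.
    w = 2 * np.pi * f
    s = np.sqrt(mu1**2 + w**2 * mu2**2)
    return np.sqrt(2 * (mu1**2 + w**2 * mu2**2) / (rho * (mu1 + s)))

# Synthetic dispersion curve from known (tissue-like, illustrative)
# parameters, then a nonlinear least-squares fit to recover them.
f = np.linspace(100.0, 800.0, 30)
mu1_true, mu2_true = 4e3, 2.0            # Pa, Pa*s
c_meas = kv_velocity(f, mu1_true, mu2_true)
(mu1_fit, mu2_fit), _ = curve_fit(kv_velocity, f, c_meas, p0=[1e3, 1.0])
```

Repeating the fit at every pixel of a stack of single-frequency phase velocity images yields the elasticity and viscosity maps described above.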
Example Systems
[0088] Referring now to FIG. 4, an example of a system 400 for generating phase velocity images in accordance with some embodiments of the systems and methods described in the present disclosure is shown. As shown in FIG. 4, a computing device 450 can receive one or more types of data (e.g., ultrasound data, optical imaging data, magnetic resonance imaging data, mechanical wave data) from image source 402, which may be an ultrasound image source, such as an ultrasound imaging system; an optical imaging source; or a magnetic resonance imaging system. In some embodiments, computing device 450 can execute at least a portion of a phase velocity estimation system 404 to generate phase velocity images from data received from the image source 402. [0089] Additionally or alternatively, in some embodiments, the computing device
450 can communicate information about data received from the image source 402 to a server 452 over a communication network 454, which can execute at least a portion of the phase velocity estimation system 404 to generate phase velocity images from data received from the image source 402. In such embodiments, the server 452 can return information to the computing device 450 (and/or any other suitable computing device) indicative of an output of the phase velocity estimation system 404 to generate phase velocity images from data received from the image source 402.
[0090] In some embodiments, computing device 450 and/or server 452 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on. The computing device 450 and/or server 452 can also reconstruct images from the data.
[0091] In some embodiments, image source 402 can be any suitable source of image data (e.g., measurement data, images reconstructed from measurement data), such as an ultrasound imaging system, an optical imaging system, an MRI system, another computing device (e.g., a server storing image data), and so on. In some embodiments, image source 402 can be local to computing device 450. For example, image source 402 can be incorporated with computing device 450 (e.g., computing device 450 can be configured as part of a device for capturing, scanning, and/or storing images). As another example, image source 402 can be connected to computing device 450 by a cable, a direct wireless link, and so on. Additionally or alternatively, in some embodiments, image source 402 can be located locally and/or remotely from computing device 450, and can communicate data to computing device 450 (and/or server 452) via a communication network (e.g., communication network 454).

[0092] In some embodiments, communication network 454 can be any suitable communication network or combination of communication networks. For example, communication network 454 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on. In some embodiments, communication network 454 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks. Communications links shown in FIG. 4 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
[0093] Referring now to FIG. 5, an example of hardware 500 that can be used to implement image source 402, computing device 450, and server 452 in accordance with some embodiments of the systems and methods described in the present disclosure is shown. As shown in FIG. 5, in some embodiments, computing device 450 can include a processor 502, a display 504, one or more inputs 506, one or more communication systems 508, and/or memory 510. In some embodiments, processor 502 can be any suitable hardware processor or combination of processors, such as a central processing unit ("CPU"), a graphics processing unit ("GPU"), and so on. In some embodiments, display 504 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 506 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
[0094] In some embodiments, communications systems 508 can include any suitable hardware, firmware, and/or software for communicating information over communication network 454 and/or any other suitable communication networks. For example, communications systems 508 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 508 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[0095] In some embodiments, memory 510 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 502 to present content using display 504, to communicate with server 452 via communications system(s) 508, and so on. Memory 510 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 510 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 510 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 450. In such embodiments, processor 502 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 452, transmit information to server 452, and so on.
[0096] In some embodiments, server 452 can include a processor 512, a display 514, one or more inputs 516, one or more communications systems 518, and/or memory 520. In some embodiments, processor 512 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, display 514 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 516 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
[0097] In some embodiments, communications systems 518 can include any suitable hardware, firmware, and/or software for communicating information over communication network 454 and/or any other suitable communication networks. For example, communications systems 518 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 518 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[0098] In some embodiments, memory 520 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 512 to present content using display 514, to communicate with one or more computing devices 450, and so on. Memory 520 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 520 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 520 can have encoded thereon a server program for controlling operation of server 452. In such embodiments, processor 512 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 450, receive information and/or content from one or more computing devices 450, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.

[0099] In some embodiments, image source 402 can include a processor 522, one or more image acquisition systems 524, one or more communications systems 526, and/or memory 528. In some embodiments, processor 522 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, the one or more image acquisition systems 524 are generally configured to acquire data, images, or both, and can include an ultrasound transducer, an optical imaging system, one or more radio frequency coils in an MRI system, and so on.
Additionally or alternatively, in some embodiments, one or more image acquisition systems 524 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of an ultrasound transducer, an optical imaging system, one or more radio frequency coils in an MRI system, and so on. In some embodiments, one or more portions of the one or more image acquisition systems 524 can be removable and/or replaceable.
[00100] Note that, although not shown, image source 402 can include any suitable inputs and/or outputs. For example, image source 402 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on. As another example, image source 402 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
[00101] In some embodiments, communications systems 526 can include any suitable hardware, firmware, and/or software for communicating information to computing device 450 (and, in some embodiments, over communication network 454 and/or any other suitable communication networks). For example, communications systems 526 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 526 can include hardware, firmware and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI, USB, RS-232, etc.), a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[00102] In some embodiments, memory 528 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 522 to control the one or more image acquisition systems 524, and/or receive data from the one or more image acquisition systems 524; to reconstruct images from data; to present content (e.g., images, a user interface) using a display; to communicate with one or more computing devices 450; and so on. Memory 528 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 528 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 528 can have encoded thereon, or otherwise stored therein, a program for controlling operation of image source 402. In such embodiments, processor 522 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images) to one or more computing devices 450, receive information and/or content from one or more computing devices 450, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
[0001] FIG. 6 illustrates an example of an ultrasound system 600 that can implement the methods described in the present disclosure. The ultrasound system 600 includes a transducer array 602 that includes a plurality of separately driven transducer elements 604. The transducer array 602 can include any suitable ultrasound transducer array, including linear arrays, curved arrays, phased arrays, and so on. Similarly, the transducer array 602 can include a 1D transducer, a 1.5D transducer, a 1.75D transducer, a 2D transducer, a 3D transducer, and so on.
[0002] When energized by a transmitter 606, a given transducer element 604 produces a burst of ultrasonic energy. The ultrasonic energy reflected back to the transducer array 602 (e.g., an echo) from the object or subject under study is converted to an electrical signal (e.g., an echo signal) by each transducer element 604 and can be applied separately to a receiver 608 through a set of switches 610. The transmitter 606, receiver 608, and switches 610 are operated under the control of a controller 612, which may include one or more processors. As one example, the controller 612 can include a computer system.
[0003] The transmitter 606 can be programmed to transmit unfocused or focused ultrasound waves. In some configurations, the transmitter 606 can also be programmed to transmit diverged waves, spherical waves, cylindrical waves, plane waves, or combinations thereof. Furthermore, the transmitter 606 can be programmed to transmit spatially or temporally encoded pulses.
[0004] The receiver 608 can be programmed to implement a suitable detection sequence for the imaging task at hand. In some embodiments, the detection sequence can include one or more of line-by-line scanning, compounding plane wave imaging, synthetic aperture imaging, and compounding diverging beam imaging.
[0005] In some configurations, the transmitter 606 and the receiver 608 can be programmed to implement a high frame rate. For instance, a frame rate associated with an acquisition pulse repetition frequency ("PRF") of at least 100 Hz can be implemented. In some configurations, the ultrasound system 600 can sample and store at least one hundred ensembles of echo signals in the temporal direction.

[0006] The controller 612 can be programmed to implement an imaging sequence as described in the present disclosure, or as otherwise known in the art. In some embodiments, the controller 612 receives user inputs defining various factors used in the implementation of the imaging sequence.
[0007] A scan can be performed by setting the switches 610 to their transmit position, thereby directing the transmitter 606 to be turned on momentarily to energize transducer elements 604 during a single transmission event according to the imaging sequence. The switches 610 can then be set to their receive position and the subsequent echo signals produced by the transducer elements 604 in response to one or more detected echoes are measured and applied to the receiver 608. The separate echo signals from the transducer elements 604 can be combined in the receiver 608 to produce a single echo signal.
[00103] The echo signals are communicated to a processing unit 614, which may be implemented by a hardware processor and memory, to process echo signals or images generated from echo signals. As an example, the processing unit 614 can generate phase velocity images using the methods described in the present disclosure. Images produced from the echo signals by the processing unit 614 can be displayed on a display system 616.
[00104] In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., random access memory ("RAM"), flash memory, electrically programmable read only memory ("EPROM"), electrically erasable programmable read only memory ("EEPROM")), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
[00105] The present disclosure has described one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.

Claims

1. A method for generating a phase velocity image from mechanical wave motion data acquired with an imaging system, the steps of the method comprising:
(a) accessing, with a computer system, mechanical wave motion data
acquired with an imaging system;
(b) estimating phase velocity data from the mechanical wave motion data using the computer system; and
(c) generating a phase velocity image from the phase velocity data.
2. The method as recited in claim 1, wherein step (b) includes estimating the phase velocity data by:
converting the mechanical wave motion data to frequency data by applying a Fourier transform to the mechanical wave motion data;
selecting a frequency component from the frequency data;
generating windowed wavefield data by windowing the frequency data associated with the frequency component using a sliding window;
generating wavenumber spectra data by Fourier transforming the windowed wavefield data;
generating the phase velocity data as a spatial distribution of phase velocity estimated from the wavenumber spectra data.
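The estimation steps recited above can be sketched, for a single frequency component, roughly as follows (a simplified illustration, not the patented algorithm as implemented; the window size, the Hann taper, and the magnitude-weighted wavenumber average are assumed choices):

```python
import numpy as np

def lpvi_map(wavefield, dt, dx, f0, win=16):
    """Sketch of local phase velocity estimation at a single frequency f0.

    wavefield: particle motion sampled as (nz, nx, nt).
    Returns an (nz, nx) map of local phase velocity estimates (m/s)."""
    nz, nx, nt = wavefield.shape
    # 1) temporal Fourier transform; keep the bin nearest f0
    spec = np.fft.fft(wavefield, axis=-1)
    fbin = int(round(f0 * nt * dt))
    field = spec[:, :, fbin]                 # complex wavefield at f0
    # 2) slide a tapered 2-D window over the field
    k_axis = np.fft.fftfreq(win, d=dx)       # cycles per metre
    kx, kz = np.meshgrid(k_axis, k_axis)
    kmag = np.sqrt(kx**2 + kz**2)
    taper = np.outer(np.hanning(win), np.hanning(win))
    cmap = np.full((nz, nx), np.nan)
    for i in range(nz - win):
        for j in range(nx - win):
            patch = field[i:i+win, j:j+win] * taper
            # 3) local wavenumber spectrum of the windowed wavefield
            K = np.abs(np.fft.fft2(patch))
            # 4) magnitude-weighted wavenumber -> phase velocity c = f0 / k
            kbar = (K * kmag).sum() / K.sum()
            if kbar > 0:
                cmap[i + win//2, j + win//2] = f0 / kbar
    return cmap

# Synthetic plane wave with known speed c = 4 m/s at f0 = 100 Hz
dt, dx = 1e-3, 5e-3
nt, nz, nx = 100, 40, 60
f0, c_true = 100.0, 4.0
t = np.arange(nt) * dt
x = np.arange(nx) * dx
wf = np.cos(2*np.pi*(f0*t[None, None, :] - (f0/c_true)*x[None, :, None])) \
     * np.ones((nz, 1, 1))
c_est = lpvi_map(wf, dt, dx, f0)[20, 30]   # interior pixel
```

The window taper causes a small spectral-leakage bias in the weighted wavenumber, so the estimate is close to, but not exactly, the true speed.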
3. The method as recited in claim 2, wherein the mechanical wave motion data are converted to the frequency data by applying a one-dimensional Fourier transform along a temporal dimension to the mechanical wave motion data.
4. The method as recited in claim 2, wherein the mechanical wave motion data are converted to the frequency data by:
generating frequency-wavenumber data by applying a three-dimensional Fourier transform to the mechanical wave motion data;
generating filtered frequency-wavenumber data by filtering the frequency-wavenumber data; and
generating the frequency data by applying a two-dimensional Fourier transform to the filtered frequency-wavenumber data along a first and second wavenumber dimension.
5. The method as recited in claim 4, wherein filtering the frequency-wavenumber data includes applying a directional filter to the frequency-wavenumber data to separate the frequency-wavenumber data into different directional components.
6. The method as recited in claim 4, wherein filtering the frequency-wavenumber data includes applying a band-pass filter to the frequency-wavenumber data in order to remove spatial frequency data outside a predetermined spatial frequency range.
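The frequency-wavenumber filtering of claims 4-6 can be sketched as a 3-D FFT, an ideal band-pass mask on the wavenumber magnitude, and an inverse 2-D FFT over the wavenumber axes (a minimal NumPy illustration; the ideal mask and the axis conventions are assumptions, and a practical filter would use smooth band edges):

```python
import numpy as np

def bandpass_fk(wavefield, dx, k_lo, k_hi):
    """3-D FFT, band-pass in spatial frequency magnitude, then inverse
    2-D FFT over the wavenumber axes, leaving per-frequency data."""
    nz, nx, nt = wavefield.shape
    fk = np.fft.fftn(wavefield)                 # (kz, kx, f) domain
    kz = np.fft.fftfreq(nz, d=dx)
    kx = np.fft.fftfreq(nx, d=dx)
    kmag = np.sqrt(kz[:, None]**2 + kx[None, :]**2)
    mask = (kmag >= k_lo) & (kmag <= k_hi)      # ideal band-pass in |k|
    fk *= mask[:, :, None]
    # back to (z, x, f): invert only the two wavenumber dimensions
    return np.fft.ifft2(fk, axes=(0, 1))

# Two spatial tones; only the in-band one (k = 5/64 cycles/sample) survives
nz, nx, nt = 8, 64, 4
x = np.arange(nx)
field2d = np.cos(2*np.pi*(5/64)*x) + np.cos(2*np.pi*(20/64)*x)
wavefield = np.tile(field2d[None, :, None], (nz, 1, nt))
filtered = bandpass_fk(wavefield, dx=1.0, k_lo=3/64, k_hi=10/64)
recovered = filtered[0, :, 0].real / nt   # DC temporal bin holds the static field
```

A directional filter (claim 5) would instead mask by the sign or angle of the wavenumber components rather than by their magnitude.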
7. The method as recited in claim 2, wherein the frequency component consists of a single frequency.
8. The method as recited in claim 2, wherein the frequency component comprises a frequency band centered on a selected frequency.
9. The method as recited in claim 2, wherein a two-dimensional cosine-tapered window is used to window the frequency data associated with the frequency component.
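One common way to realize a two-dimensional cosine-tapered window is the outer product of two one-dimensional Tukey windows (a sketch; the taper fraction α is an assumed parameter, not a value from the disclosure):

```python
import numpy as np
from scipy.signal.windows import tukey

def cosine_tapered_2d(n, alpha=0.5):
    """2-D cosine-tapered (Tukey) window: outer product of 1-D windows.

    alpha is the fraction of the window occupied by the cosine taper;
    alpha=0 gives a rectangular window, alpha=1 a Hann window."""
    w = tukey(n, alpha)
    return np.outer(w, w)

W = cosine_tapered_2d(32)
```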
10. The method as recited in claim 2, wherein the wavenumber spectra data are generated by applying a two-dimensional Fourier transform to the windowed wavefield data, wherein the two-dimensional Fourier transform is applied along a first and second spatial dimension.
11. The method as recited in claim 2, wherein the wavenumber spectra data comprise first wavenumber spectra data and second wavenumber spectra data, wherein the first wavenumber spectra data are generated by applying a first one-dimensional Fourier transform to the windowed wavefield data along a first spatial dimension and the second wavenumber spectra data are generated by applying a second one-dimensional Fourier transform to the windowed wavefield data along a second spatial dimension.
12. The method as recited in claim 2, wherein the spatial distribution of the phase velocity is generated based in part on dividing the frequency component by a magnitude of the wavenumber spectra data.
13. The method as recited in claim 1, wherein step (b) includes generating directionally filtered component data by directionally filtering the mechanical wave motion data into different directionally filtered components and estimating phase velocity data for each directionally filtered component.
14. The method as recited in claim 13, wherein generating the phase velocity image includes combining the phase velocity data estimated for each different directionally filtered component.
15. The method as recited in claim 1, wherein the mechanical wave motion data comprise at least one of shear wave motion data or guided wave motion data.
16. The method as recited in claim 1, wherein the imaging system is one of an ultrasound system, a magnetic resonance imaging (MRI) system, or an optical imaging system.
17. The method as recited in claim 1, further comprising generating a mechanical property map from the phase velocity image.
18. The method as recited in claim 17, wherein the mechanical property map is generated based on fitting data in the phase velocity image to a model.
19. A method for controlling an ultrasound system to acquire mechanical wave motion data, the steps of the method comprising:
determining optimal acquisition parameters for an ultrasound system by:
(a) acquiring mechanical wave motion data with the ultrasound system over a range of frequency values;
(b) reconstructing phase velocity images from the mechanical wave motion data;
(c) analyzing the phase velocity images to determine whether a plateau in phase velocities is present in the phase velocity images; and
(d) when the plateau in phase velocities is not present in the phase velocity images, adjusting acquisition parameters of the ultrasound system and repeating steps (a)-(c) using different frequency values until the plateau in phase velocities is present in the phase velocity images; and
storing the adjusted acquisition parameters as the optimal acquisition parameters, wherein the optimal acquisition parameters are associated with an optimal bandwidth of mechanical wave motion in order to improve accuracy of phase velocity measurements.
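The plateau test in step (c) could be implemented, for example, as a search for a run of consecutive frequency samples whose phase velocities are nearly constant (the tolerance and run length here are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def has_plateau(velocities, rel_tol=0.05, min_pts=4):
    """Return True if some run of min_pts consecutive phase velocity
    estimates varies by less than rel_tol (relative standard deviation)."""
    for i in range(len(velocities) - min_pts + 1):
        seg = np.asarray(velocities[i:i + min_pts])
        if seg.std() / seg.mean() < rel_tol:
            return True
    return False

# A nearly constant dispersion curve exhibits a plateau;
# a steadily rising one does not.
flat = has_plateau([2.00, 2.01, 2.00, 2.02, 2.01, 2.00])
rising = has_plateau([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
```

In the acquisition loop of steps (a)-(d), this predicate would gate whether the push-beam and tracking parameters are adjusted and the sweep repeated.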
20. The method as recited in claim 19, wherein reconstructing phase velocity images from the mechanical wave motion data comprises inputting the mechanical wave motion data to a local phase velocity imaging (LPVI) algorithm, generating output as the phase velocity images, wherein the LPVI algorithm generates the phase velocity images based in part on converting the mechanical wave motion data to a frequency- wavenumber domain.
21. The method as recited in claim 20, wherein the LPVI algorithm comprises:
converting the mechanical wave motion data to frequency data by applying a Fourier transform to the mechanical wave motion data;
selecting a frequency component from the frequency data;
generating windowed wavefield data by windowing the frequency data associated with the frequency component using a sliding window;
generating wavenumber spectra data by Fourier transforming the windowed wavefield data;
generating phase velocity data as a spatial distribution of phase velocity estimated from the wavenumber spectra data; and
reconstructing the phase velocity images from the phase velocity data.
22. The method as recited in claim 21, wherein the mechanical wave motion data are converted to the frequency data by applying a one-dimensional Fourier transform along a temporal dimension to the mechanical wave motion data.
23. The method as recited in claim 21, wherein the mechanical wave motion data are converted to the frequency data by:
generating frequency-wavenumber data by applying a three-dimensional Fourier transform to the mechanical wave motion data;
generating filtered frequency-wavenumber data by filtering the frequency-wavenumber data; and
generating the frequency data by applying a two-dimensional Fourier transform to the filtered frequency-wavenumber data along a first and second wavenumber dimension.
24. The method as recited in claim 23, wherein filtering the frequency-wavenumber data includes applying a directional filter to the frequency-wavenumber data to separate the frequency-wavenumber data into different directional components.
25. The method as recited in claim 23, wherein filtering the frequency-wavenumber data includes applying a band-pass filter to the frequency-wavenumber data in order to remove spatial frequency data outside a predetermined spatial frequency range.
26. The method as recited in claim 21, wherein the frequency component consists of a single frequency.
27. The method as recited in claim 21, wherein the frequency component comprises a frequency band centered on a selected frequency.
28. The method as recited in claim 21, wherein a two-dimensional cosine-tapered window is used to window the frequency data associated with the frequency component.
29. The method as recited in claim 21, wherein the wavenumber spectra data are generated by applying a two-dimensional Fourier transform to the windowed wavefield data, wherein the two-dimensional Fourier transform is applied along a first and second spatial dimension.
30. The method as recited in claim 21, wherein the wavenumber spectra data comprise first wavenumber spectra data and second wavenumber spectra data, wherein the first wavenumber spectra data are generated by applying a first one-dimensional Fourier transform to the windowed wavefield data along a first spatial dimension and the second wavenumber spectra data are generated by applying a second one-dimensional Fourier transform to the windowed wavefield data along a second spatial dimension.
31. The method as recited in claim 21, wherein the spatial distribution of the phase velocity is generated based in part on dividing the frequency component by a magnitude of the wavenumber spectra data.
32. The method as recited in claim 21, further comprising filtering the frequency data using at least one of directional filtering or wavenumber filtering before generating the windowed wavefield data by windowing the frequency data after they have been filtered.
33. The method as recited in claim 19, wherein the mechanical wave motion data comprise at least one of shear wave motion data or guided wave motion data.
34. The method as recited in claim 19, wherein the imaging system is one of an ultrasound system, a magnetic resonance imaging (MRI) system, or an optical imaging system.
PCT/US2019/048519 2018-10-03 2019-08-28 Phase velocity imaging using an imaging system WO2020072147A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2021518665A JP2022504299A (en) 2018-10-03 2019-08-28 Phase velocity imaging using an imaging system
EP19765946.9A EP3861368A1 (en) 2018-10-03 2019-08-28 Phase velocity imaging using an imaging system
US17/282,569 US20210341429A1 (en) 2018-10-03 2019-08-28 Phase Velocity Imaging Using an Imaging System

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862740730P 2018-10-03 2018-10-03
US62/740,730 2018-10-03

Publications (1)

Publication Number Publication Date
WO2020072147A1 true WO2020072147A1 (en) 2020-04-09

Family

ID=67902689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/048519 WO2020072147A1 (en) 2018-10-03 2019-08-28 Phase velocity imaging using an imaging system

Country Status (4)

Country Link
US (1) US20210341429A1 (en)
EP (1) EP3861368A1 (en)
JP (1) JP2022504299A (en)
WO (1) WO2020072147A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220036744A1 (en) * 2020-08-02 2022-02-03 Yoshikazu Yokotani System to automate a non-destructive test for stress or stress change using unmanned aerial vehicle and ultrasound

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK92255C (en) 1958-06-27 1961-11-27 Siemens Ag Coupling for determining fees for circular connections in remote writing systems with voters.
WO2015009339A1 (en) * 2013-07-19 2015-01-22 Mayo Foundation For Medical Education And Research System and method for measurement of shear wave speed from multi-directional wave fields

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006079018A2 (en) * 2005-01-19 2006-07-27 The Johns Hopkins University Method of assessing vascular reactivity using magnetic resonance imaging, applications program and media embodying same
US9700220B2 (en) * 2006-04-25 2017-07-11 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus and magnetic resonance imaging method
KR102207919B1 (en) * 2013-06-18 2021-01-26 삼성전자주식회사 Method, apparatus and system for generating ultrasound
US11071524B2 (en) * 2015-12-04 2021-07-27 Canon Medical Systems Corporation Analyzing apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK92255C (en) 1958-06-27 1961-11-27 Siemens Ag Coupling for determining fees for circular connections in remote writing systems with voters.
WO2015009339A1 (en) * 2013-07-19 2015-01-22 Mayo Foundation For Medical Education And Research System and method for measurement of shear wave speed from multi-directional wave fields
US9622711B2 (en) 2013-07-19 2017-04-18 Mayo Foundation For Medical Education And Research System and method for measurement of shear wave speed from multi-directional wave fields

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
KIJANKA PIOTR ET AL: "Local Phase Velocity Based Imaging: A New Technique Used for Ultrasound Shear Wave Elastography", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 38, no. 4, 8 October 2018 (2018-10-08), pages 894 - 908, XP011717435, ISSN: 0278-0062, [retrieved on 20190401], DOI: 10.1109/TMI.2018.2874545 *
LINGYU YU ET AL: "Crack imaging and quantification in aluminum plates with guided wave wavenumber analysis methods", ULTRASONICS., vol. 62, 1 September 2015 (2015-09-01), GB, pages 203 - 212, XP055641864, ISSN: 0041-624X, DOI: 10.1016/j.ultras.2015.05.019 *
M. BERNAL ET AL.: "Material property estimation for tubes and arteries using ultrasound radiation force and analysis of propagating modes", ACOUST. SOC. AM., vol. 129, 2011, pages 1344 - 1354, XP012136385, doi:10.1121/1.3533735
P. KIJANKA ET AL.: "Robust phase velocity dispersion estimation of viscoelastic materials used for medical applications based on the Multiple Signal Classification Method", IEEE TRANS ULTRASON FERROELECTR FREQ CONTROL, vol. 65, 2018, pages 423 - 439
P. SONG ET AL.: "Fast shear compounding using robust 2-D shear wave speed calculation and multi-directional filtering", ULTRASOUND MED. BIOL., vol. 40, 2014, pages 1343 - 1355, XP055185962, doi:10.1016/j.ultrasmedbio.2013.12.026
Q. WU ET AL.: "Measurement of interstation phase velocity by wavelet transformation", EARTHQUAKE SCIENCE, vol. 22, 2009, pages 425 - 429, XP036358255, doi:10.1007/s11589-009-0425-3
R.S. ANDERSSENM. HEGLAND: "For numerical differentiation, dimensionality can be a blessing!", MATHEMATICS OF COMPUTATION, vol. 68, no. 227, 1999, pages 1121 - 1141

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111551630A (en) * 2020-04-23 2020-08-18 上海大学 Damage non-wave-velocity positioning method based on space-wave-number filter
CN113281368A (en) * 2021-05-19 2021-08-20 成都鸣石峻致医疗科技有限公司 Magnetic resonance elasticity measurement method, device, computer equipment, system and storage medium
CN113281368B (en) * 2021-05-19 2024-04-02 成都鸣石峻致科技有限公司 Magnetic resonance elasticity measurement method, device, computer equipment, system and storage medium
CN114185095A (en) * 2021-12-02 2022-03-15 中国石油大学(北京) Method for suppressing multiple waves of three-dimensional plane wave domain seismic data

Also Published As

Publication number Publication date
JP2022504299A (en) 2022-01-13
EP3861368A1 (en) 2021-08-11
US20210341429A1 (en) 2021-11-04

Similar Documents

Publication Publication Date Title
US20210341429A1 (en) Phase Velocity Imaging Using an Imaging System
Kijanka et al. Local phase velocity based imaging: A new technique used for ultrasound shear wave elastography
US20190038258A1 (en) Ultrasound waveform tomography with spatial and edge regularization
US9113826B2 (en) Ultrasonic diagnosis apparatus, image processing apparatus, control method for ultrasonic diagnosis apparatus, and image processing method
US11071520B2 (en) Tissue imaging and analysis using ultrasound waveform tomography
US8211019B2 (en) Clinical apparatuses
US10743837B2 (en) Ultrasound waveform tomography method and system
US20210080573A1 (en) Quantitative Ultrasound Imaging Based on Seismic Full Waveform Inversion
CN103239258B (en) Hyperacoustic coaxial shear wave is adopted to characterize
Morin et al. Semi-blind deconvolution for resolution enhancement in ultrasound imaging
US20130204137A1 (en) Method and System for Denoising Acoustic Travel Times and Imaging a Volume of Tissue
CN110892260B (en) Shear wave viscoelastic imaging using local system identification
CN112823283A (en) Method and system for non-invasively characterizing heterogeneous media using ultrasound
Kijanka et al. Two-point frequency shift method for shear wave attenuation measurement
JP2016087220A (en) Subject information acquisition apparatus
US20200163649A1 (en) Estimating Phase Velocity Dispersion in Ultrasound Elastography Using a Multiple Signal Classification
KR20160125910A (en) Method and system for automatic estimation of shear modulus and viscosity from shear wave imaging
Smith et al. Optimum scan spacing for three-dimensional ultrasound by speckle statistics
Kijanka et al. Fast local phase velocity-based imaging: Shear wave particle velocity and displacement motion study
Kretzek et al. GPU-based 3D SAFT reconstruction including attenuation correction
Ulrich et al. Full-waveform inversion with resolution proxies for in-vivo ultrasound computed tomography
US20240074734A1 (en) Shear wave phase velocity estimation with extended bandwidth using generalized stockwell transform and slant frequency wavenumber analysis
CN114972567B (en) Medical ultrasonic CT multi-parameter image reconstruction method based on wave equation
Nightingale et al. Comparison of qualitative and quantitative acoustic radiation force based elasticity imaging methods
Xu et al. Compact reverse time migration: A real-time approach for full waveform ultrasound imaging for breast

Legal Events

Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19765946; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021518665; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2019765946; Country of ref document: EP; Effective date: 20210503)