EP4687684A1 - Improved ultrasound blood flow imaging using similarity measurements to increase the contrast-to-noise ratio - Google Patents

Improved ultrasound blood flow imaging using similarity measurements to increase the contrast-to-noise ratio

Info

Publication number
EP4687684A1
Authority
EP
European Patent Office
Prior art keywords
similarity
ultrasound data
ultrasound
data
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP24723262.2A
Other languages
English (en)
French (fr)
Inventor
Chengwu HUANG
Shigao Chen
Jingke ZHANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mayo Foundation for Medical Education and Research
Mayo Clinic in Florida
Original Assignee
Mayo Foundation for Medical Education and Research
Mayo Clinic in Florida
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mayo Foundation for Medical Education and Research and Mayo Clinic in Florida
Publication of EP4687684A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 Clinical applications
    • A61B8/0833 Clinical applications involving detecting or locating foreign bodies or organic structures
    • A61B8/085 Clinical applications involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 Clinical applications
    • A61B8/0891 Clinical applications for diagnosis of blood vessels
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/48 Diagnostic techniques
    • A61B8/481 Diagnostic techniques involving the use of contrast agents, e.g. microbubbles introduced into the bloodstream
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5207 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5238 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B8/5246 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from the same or different imaging techniques, e.g. color Doppler and B-mode
    • A61B8/5253 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from the same or different imaging techniques, e.g. color Doppler and B-mode combining overlapping images, e.g. spatial compounding
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5269 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving detection or reduction of artifacts
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88 Sonar systems specially adapted for specific applications
    • G01S15/89 Sonar systems specially adapted for specific applications for mapping or imaging
    • G01S15/8906 Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques
    • G01S15/8995 Combining images from different aspect angles, e.g. spatial compounding
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52023 Details of receivers
    • G01S7/52025 Details of receivers for pulse systems
    • G01S7/52026 Extracting wanted echo signals
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52046 Techniques for image enhancement involving transmitter or receiver
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52077 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging with means for elimination of unwanted signals, e.g. noise or interference

Definitions

  • Doppler ultrasound and contrast-enhanced ultrasound imaging are common medical imaging modalities and have been broadly used to provide quantitative and/or qualitative information on tissue perfusion and blood flow hemodynamics that are clinically relevant. These imaging modalities form images of blood flow/tissue perfusion by transmitting ultrasound pulses into the target tissue and collecting backscattered ultrasound echo signals from the moving red blood cells or ultrasound contrast agents (e.g., microbubbles).
  • Ultrasound microbubbles are one of the most routinely used contrast agents, which have a size of several µm and can be administered into the bloodstream via intravenous injection to provide strong ultrasound backscattering for contrast enhancement of blood flow.
  • the recent developments of super-resolution ultrasound microvessel imaging (“SRUI”) have also relied on the detection of microbubble signals.
  • the principle of this technology is to identify individual microbubbles in the bloodstream and to utilize the center position of microbubbles to reconstruct the microvascular image with a spatial resolution much higher than the original ultrasound resolution.
  • SNR microbubble signal-to-noise ratio
  • CNR contrast-to-noise ratio
  • CR contrast ratio
  • the use of unfocused ultrasound transmission (such as plane wave imaging) in state-of-the-art ultrasound technologies is also one of the reasons for low SNR in deep tissue, which becomes even more challenging in clinical scanning scenarios.
  • Filter-based methods such as nonlocal mean (“NLM”) filtering can be applied to the microbubble data, but have a drawback of high computational cost.
  • An adaptive intensity-based thresholding method can also be used, in which the microbubble signals are assumed to be stronger than the background noise. Pixels with intensity below the thresholds can then be adaptively removed based on this assumption.
  • weak microbubble signals submerged in the background noise and/or artifact can be easily rejected. Therefore, a better denoising approach for individual microbubble signal enhancement is highly warranted for robust microbubble localization and imaging.
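The drawback of intensity-based thresholding can be sketched numerically. The following is a hedged NumPy illustration (the frame values and the mean-plus-two-standard-deviations rule are assumptions for illustration, not taken from the disclosure): a strong echo survives the adaptive threshold, while a weak echo near the noise floor is rejected along with the noise.

```python
import numpy as np

# Synthetic frame: uniform background "noise" floor plus two echoes.
frame = np.full((8, 8), 1.0)   # background level
frame[4, 4] = 8.0              # strong microbubble echo
frame[1, 1] = 1.5              # weak microbubble echo near the noise level

# Adaptive threshold derived from the frame statistics (one common choice).
threshold = frame.mean() + 2.0 * frame.std()

# Pixels below the threshold are removed; the weak echo is lost.
mask = frame > threshold
```

Only the strong echo remains in `mask`, which is exactly the failure mode motivating the similarity-based approach described later.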
  • the general principle of Doppler ultrasound without using contrast agents is that an ultrasound pulse is transmitted into and received from a targeted tissue containing blood flow, and this pulse transmission and receiving process is repeated a certain number of times (e.g., 16 or 32 times; this number of repetitions is referred to as the packet size or ensemble size in Doppler ultrasound).
  • the received ultrasound backscattering echo signals contain signal components from the tissue and signal components from the blood flow.
  • a clutter filter or wall filter can be used to separate the tissue signal from the moving blood flow signals by analyzing this packet of received signals.
  • Blood flow images (e.g., color Doppler or power Doppler images) can then be generated from the filtered blood flow signals.
  • one power Doppler image can be generated from one packet of data by accumulating or averaging the intensity of the blood flow signal over the temporal dimension (slow time dimension).
  • the sensitivity of Doppler ultrasound in detecting small vessels can be increased by using a longer data acquisition (i.e., larger packet size, for example, > 100) combined with advanced clutter filters (e.g., SVD filter).
  • This high-sensitive Doppler ultrasound is also referred to as ultrasound microvessel imaging (“UMI”).
  • some imaging methods choose to display single frames of blood flow signal before accumulation, such as b-flow imaging, to directly and dynamically visualize the variation of ultrasound speckle of red blood cells with high temporal resolution. In this case, the SNR of the single-frame blood flow signal will be important.
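The packet acquisition, SVD clutter filtering, and power Doppler accumulation described above can be sketched as follows. This is a hedged NumPy illustration with synthetic data; the packet size, rank cutoff, and signal levels are assumptions, not the patent's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, nt = 16, 16, 32                           # packet (ensemble) size of 32
tissue = np.ones((nx, ny, nt)) * 10.0             # strong, nearly static tissue clutter
blood = 0.5 * rng.standard_normal((nx, ny, nt))   # weak, fast-decorrelating flow signal
packet = tissue + blood

# SVD clutter filter: recast the packet as a (space x slow-time) Casorati
# matrix and zero the largest singular components, which capture the slowly
# varying tissue signal.
casorati = packet.reshape(nx * ny, nt)
U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
n_clutter = 1                                     # rank cutoff (assumed)
s_filtered = s.copy()
s_filtered[:n_clutter] = 0.0
blood_est = (U * s_filtered) @ Vt

# Power Doppler: average the intensity over the slow-time dimension.
power_doppler = (np.abs(blood_est) ** 2).mean(axis=1).reshape(nx, ny)
```

After filtering, the residual power is on the order of the blood signal alone, orders of magnitude below the unfiltered tissue power.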
  • the present disclosure addresses the aforementioned drawbacks by providing a method for increasing a contrast-to-noise ratio (“CNR”) of ultrasound data acquired with an ultrasound system.
  • the method includes accessing ultrasound data with a computer system, where the ultrasound data have been acquired with an ultrasound system and are contaminated with noise and/or artifact.
  • a first subset of the ultrasound data and a second subset of the ultrasound data are formed using the computer system.
  • Similarity measurement data are then generated by performing a similarity analysis between the first and second subsets of the ultrasound data using the computer system.
  • Enhanced ultrasound data are generated with the computer system by at least one of enhancing signal information or reducing noise in the ultrasound data using the similarity measurement data, where the enhanced ultrasound data have an increased CNR relative to the ultrasound data.
  • FIG. 1 is a flowchart illustrating the steps of an example method for enhancing ultrasound signal data based on a similarity analysis.
  • FIG. 2 illustrates the similarity of the ultrasound signal estimated as a blockwise normalized cross-correlation between two ultrasound frames.
  • FIG. 3 illustrates a method of temporal autocorrelation across multiple ultrasound frames for measurement of similarity to enhance ultrasound microbubble signal.
  • FIG. 4 illustrates a method of forming two series of ultrasound data based on ultrasound compounding imaging.
  • FIG. 5 illustrates a method of cross-correlation between two or more series of multi-frame ultrasound data for measurement of similarity to enhance ultrasound microbubble signal.
  • FIG. 6 illustrates a method of similarity measurement based on precompounded ultrasound data for single-frame ultrasound microbubble signal enhancement.
  • FIG. 7 illustrates a method of similarity measurement based on precompounded ultrasound data acquired using a row-column array transducer for single-frame ultrasound microbubble signal enhancement.
  • FIG. 8 illustrates an example of a 3D blood flow image before (FIG. 8A) and after (FIG. 8B) enhancement using the techniques according to some aspects of the present disclosure.
  • FIG. 9 illustrates the schematic of ultrasound b-flow and power Doppler or microvessel imaging, and shows an example of an ultrasound b-flow microvessel image (C) and a power Doppler microvessel image (D) derived from data captured from a human liver using ultrasound compounding imaging.
  • FIG. 10 illustrates the (A) enhanced b-flow microvessel image and the (B) enhanced power Doppler microvessel image without using an ultrasound contrast agent based on the technologies described in the present disclosure.
  • FIG. 11 illustrates an example in which the quality of a conventional ultrasound B-mode image is enhanced using the techniques described in the present disclosure.
  • FIG. 12 is a block diagram of an example ultrasound system that can implement the methods described in the present disclosure.
  • FIG. 13 is a block diagram of an example system for improving image quality of microvessel ultrasound images.
  • FIG. 14 is a block diagram of example components that can implement the system of FIG. 13.
  • Described here are systems and methods for improving the image quality in microvessel ultrasound imaging or conventional ultrasound imaging of tissue, such as by improving the contrast-to-noise ratio (“CNR”) of the contrast agent (e.g., microbubble) and/or blood flow signal, and/or tissue signal.
  • the disclosed systems and methods can improve both contrast-enhanced and non-contrast-enhanced ultrasound imaging techniques.
  • the disclosed techniques can be applied to enhance single ultrasound frames, which can facilitate better microbubble detection, localization, and improved microvessel blood flow imaging without using a contrast agent (e.g., b-flow microvessel imaging).
  • the techniques described in the present disclosure can be applied to multiple ultrasound frames to further improve the CNR of Doppler images that may be produced from multiple ultrasound frames.
  • the techniques described in the present disclosure can be applied to ultrasound B-mode frames to improve the CNR by enhancing the tissue signal and suppressing the artifacts and/or noise.
  • the disclosed techniques provide an improvement in the CNR of the contrast agent (e.g., microbubble) and/or blood flow signal in the context of blood flow imaging, and tissue signal in the context of B-mode imaging.
  • imaging targets may include microbubble contrast agents or red blood cells in blood flow.
  • the undesired background noise and/or artifact has high spatial or temporal variations; that is, pixels of noise and/or artifact vary in intensity and/or phase among different acquisitions (i.e., different frames).
  • the true target signals are less varying and more coherent.
  • pixels of a true target signal tend to be more similar in terms of signal intensity and phase among different acquisitions.
  • the target signal and noise/artifact can be discriminated by the similarity level. For instance, a pixel is more likely to be a true target signal with a higher similarity among a series of acquired data.
  • the similarity level can be determined by a similarity analysis including, but not limited to, computing any form of cross-correlation, including normalized cross-correlation; any form of autocorrelation, including normalized autocorrelation; any form of low-rank analysis, such as principal component analysis (“PCA”), eigenvalue decomposition (“EVD”), and singular value decomposition (“SVD”); cross-similarity; inner product; dot product; absolute difference; Euclidean distance; or other forms of similarity measures such as distance-based similarity measures, feature-based similarity measures, probabilistic similarity measures, and/or compression-based similarity measures among a series of acquired data.
  • the target signals and/or images can then be enhanced based on the measurement of similarity.
  • pixels of high similarity can be considered real targets and can be enhanced, while pixels of poor similarity can be considered undesired noise and/or artifact and can be suppressed, leading to an improved CNR of the target signal.
  • the signal can be enhanced as a weighting process in which a weight related to the similarity level (e.g., higher similarity corresponds to a higher weight value) can be applied to the original noise-contaminated data pixel-by-pixel.
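A minimal sketch of this pixel-by-pixel weighting, assuming NumPy and placeholder data (the squared-similarity weight is one of the options the disclosure mentions; the values are arbitrary):

```python
import numpy as np

data = np.array([[4.0, 0.9],
                 [1.1, 5.0]])                        # noise-contaminated pixels
similarity = np.array([[0.95, 0.10],
                       [0.15, 0.90]])                # similarity map in [0, 1]

# Higher similarity -> higher weight; squaring sharpens the contrast.
weight = np.clip(similarity, 0.0, 1.0) ** 2
enhanced = weight * data                             # pixel-by-pixel weighting
```

High-similarity pixels are nearly preserved while low-similarity (noise) pixels are strongly suppressed.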
  • the method includes accessing noise-contaminated ultrasound signal data, as indicated at step 102.
  • Accessing the noise-contaminated ultrasound signal data may include retrieving such data from a memory or other suitable data storage device or medium. Additionally or alternatively, accessing the noise-contaminated ultrasound signal data may include acquiring such data with an ultrasound system and transferring or otherwise communicating the data to the computer system, which may be a part of the ultrasound system.
  • the ultrasound signal data can include raw ultrasound data, images reconstructed or otherwise generated from raw ultrasound data, or combinations thereof.
  • 2D ultrasound imaging can be used as an example to facilitate description of the systems and methods described in the present disclosure. It will be appreciated, however, that the ultrasound signal data can also include data acquired with 1D, 2D, 3D, or other data dimensions.
  • an ultrasound frame represents 2D ultrasound data generated from the received ultrasound echo signals in one or multiple acquisitions.
  • a 2D ultrasound frame can be acquired using any suitable detection sequence, including line-by-line scanning, compounding plane wave imaging, synthetic aperture imaging, compounding diverging beam imaging, and so on.
  • the transmission and/or reception of ultrasound can be performed with an ultrasound system that utilizes any suitable form of ultrasound transducers, including but not limited to a linear array, curved array, 1D array, 2D array, or row-column array of transducers.
  • the received ultrasound data can be in any suitable format, such as ultrasound radio frequency (“RF”) data, ultrasound in-phase quadrature (“IQ”) data, ultrasound envelope data, or the like.
  • RF ultrasound radio frequency
  • IQ ultrasound in-phase quadrature
  • the noise-contaminated ultrasound signal data are then processed by the computer system using a similarity analysis to determine the spatial-temporal similarity of each pixel in the noise-contaminated ultrasound signal data, generating similarity measurement data as an output, as indicated at step 104.
  • the similarity analysis performed by the disclosed systems and methods allows for an increase in CNR by enhancing signals and suppressing noise.
  • the similarity analysis can take many different forms.
  • the measurement of similarity can be obtained by calculating the local spatial correlation between two ultrasound frames.
  • a temporal autocorrelation across multiple ultrasound frames can be used to determine the similarity at each pixel location.
  • a cross-correlation between two or more series of ultrasound data can be used to generate the similarity measurements.
  • a pairwise combination or combination of all of the above approaches can also be beneficially conducted to further improve similarity measurement and, therefore, further improve microbubble or blood flow signal enhancement.
  • the noise-contaminated ultrasound signal data are processed by the computer system, generating enhanced ultrasound signal data as an output, as indicated at step 106.
  • the enhanced ultrasound signal data can include ultrasound signal data in which signals have been enhanced, noise and/or artifact has been suppressed, or combinations thereof.
  • the enhanced ultrasound signal data are then presented to a user, or stored for later use and/or further processing, as indicated at step 108.
  • the similarity measurement data can be generated using a similarity analysis that is based on calculating the local spatial correlation between two ultrasound frames.
  • tissue components have been removed or suppressed by either nonlinear imaging or filtering, such as clutter filtering, wall filtering, or the like. Therefore, the given ultrasound data are assumed to primarily contain target signals (e.g., microbubble contrast agent, blood flow signal) and undesired background noise.
  • the similarity of the ultrasound signal can be estimated as a block-wise normalized cross-correlation between two ultrasound frames. An example of this process is illustrated in FIGS. 2A-2C.
  • the intensity of each pixel in the cross-correlation map (FIG. 2B) is the normalized cross-correlation coefficient calculated between small blocks of signals in the two ultrasound frames at the corresponding position of this pixel.
  • An example of a small block 202 is indicated in FIG. 2A by the white rectangle.
  • the size of the block 202 can be arbitrary.
  • the two correlating frames are preferably adjacent frames, but this is not a requirement.
  • this correlation map represents a map of similarity that provides information on the likelihood of a pixel being the true target, or random noise. Based on this assumption, pixels in the original ultrasound image (e.g., FIG. 2A) with a high similarity level (i.e., high cross-correlation coefficient) can be enhanced, while pixels with a low similarity level (i.e., low cross-correlation coefficient) can be suppressed.
  • the similarity map can be further preprocessed in any suitable way, such as smoothing. Any suitable way of signal enhancement based on the similarity information can be applied.
  • the correlation map can be converted to a weighting map applied to the original ultrasound image (e.g., the dot product between the weighting map and ultrasound image) to generate an enhanced ultrasound image.
  • the weighting map can be any function of the similarity map.
  • the weighting map can be the same as the similarity map (i.e., correlation map in this example).
  • each pixel of the weighting map can be a square of the corresponding pixel in the similarity map.
  • FIG. 2C shows an example of the enhanced ultrasound image of a microbubble obtained by applying a weighting map to the original ultrasound image (FIG. 2A), showing an improved CNR for microbubble detection.
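The block-wise normalized cross-correlation between two frames can be sketched as follows. This is a rough NumPy illustration; the block size, zero-mean normalization, and synthetic microbubble data are assumptions, not the patent's exact procedure.

```python
import numpy as np

def blockwise_ncc(frame1, frame2, half=2):
    """Per-pixel normalized cross-correlation between co-located small blocks."""
    nx, ny = frame1.shape
    ncc = np.zeros((nx, ny))
    for i in range(nx):
        for j in range(ny):
            sl = (slice(max(i - half, 0), i + half + 1),
                  slice(max(j - half, 0), j + half + 1))
            a = frame1[sl] - frame1[sl].mean()
            b = frame2[sl] - frame2[sl].mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            ncc[i, j] = (a * b).sum() / denom if denom > 0 else 0.0
    return ncc

# Two frames sharing a bubble echo but with independent noise realizations.
rng = np.random.default_rng(2)
bubble = np.zeros((32, 32))
bubble[16, 16] = 5.0
f1 = bubble + 0.3 * rng.standard_normal((32, 32))
f2 = bubble + 0.3 * rng.standard_normal((32, 32))

similarity = blockwise_ncc(f1, f2)   # correlation map as in FIG. 2B
enhanced = similarity * f1           # weighting the original frame
```

The bubble region correlates strongly across the two frames while the noise-only regions do not, so the weighting suppresses the background.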
  • the similarity measurement data can be generated using a similarity analysis that is based on a temporal autocorrelation across multiple ultrasound frames and can be used to determine the similarity at each pixel location.
  • each pixel of the similarity map was calculated from a spatial block of data.
  • a temporal correlation method can be used, which is based on multiple temporal ultrasound frames, as illustrated in FIGS. 3A-3C.
  • S(x, y, t) is the given multiframe ultrasound data of the target signal contaminated with noise, where x and y correspond to the spatial dimensions of the 2D image, and t is the frame number corresponding to the temporal dimension (FIG. 3A).
  • the degree of similarity can be calculated as a normalized autocorrelation of the temporal signal S(xi, yi, t) with a specific lag.
  • a normalized autocorrelation of the temporal signal S(xi, yi, t) with a lag of one frame is the normalized correlation of S(xi, yi, t) with a lag-one version of itself, i.e., S(xi, yi, t−1).
  • the number of the lagging frame can be arbitrary.
  • the degree of similarity can also be a combination (e.g., accumulation) of some of or all of the autocorrelations calculated with different lags.
  • FIG. 3B shows an example of the similarity map obtained by calculating the lag-one normalized autocorrelation.
  • FIG. 3C is an enhanced ultrasound microbubble image obtained as the original image weighted by the similarity map, showing an improved contrast-to-background ratio.
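The lag-one temporal autocorrelation can be sketched per pixel as follows; this is a hedged NumPy illustration with synthetic data (packet size, signal shape, and noise level are assumptions).

```python
import numpy as np

def lag_one_autocorr(S):
    """Normalized lag-one autocorrelation along slow time.

    S has shape (nx, ny, nt); returns an (nx, ny) similarity map."""
    a = S[..., 1:] - S[..., 1:].mean(axis=-1, keepdims=True)
    b = S[..., :-1] - S[..., :-1].mean(axis=-1, keepdims=True)
    denom = np.sqrt((a * a).sum(-1) * (b * b).sum(-1))
    with np.errstate(invalid="ignore", divide="ignore"):
        r = (a * b).sum(-1) / denom
    return np.nan_to_num(r)

rng = np.random.default_rng(3)
nt = 20
S = 0.3 * rng.standard_normal((32, 32, nt))   # temporally white background noise
S[16, 16, :] += np.linspace(0.0, 4.0, nt)     # temporally coherent target signal

similarity = lag_one_autocorr(S)                         # map as in FIG. 3B
enhanced = similarity.clip(0) * np.abs(S).mean(axis=-1)  # weighted image
```

The coherent pixel scores near one while white-noise pixels score near zero, mirroring the separation shown in FIG. 3B.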
  • the similarity map can be preprocessed in any suitable way such as smoothing before applying to image enhancement.
  • the similarity map can be converted to a weighting map applied to the original ultrasound image.
  • the weighting map can also be any function of the similarity map.
  • the weighting map can be the same as the similarity map (i.e., the correlation map).
  • each pixel of the weighting map can be a square of the corresponding pixel in the similarity map.
  • each pixel in the similarity map can be calculated from a certain number of spatial pixels and temporal pixels based on methods described above.
  • the similarity map generated by spatial correlation analysis (i.e., the spatial similarity map) can be combined with the similarity map generated by temporal correlation analysis (i.e., the temporal similarity map). The combination of the two similarity maps can be additive, multiplicative, or can be carried out using other forms or functions for combining the similarity maps.
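The additive and multiplicative combinations can be sketched as, e.g. (the maps below are placeholders, not data from the disclosure):

```python
import numpy as np

spatial_map = np.array([[0.9, 0.2],
                        [0.8, 0.1]])   # spatial similarity map (placeholder)
temporal_map = np.array([[0.8, 0.7],
                         [0.9, 0.1]])  # temporal similarity map (placeholder)

# Multiplicative fusion: a pixel must score well on both measures.
combined_mult = spatial_map * temporal_map
# Additive fusion (averaged): a pixel may score well on either measure.
combined_add = 0.5 * (spatial_map + temporal_map)
```

Multiplicative fusion is stricter and tends to suppress noise more aggressively, which is why it is a natural choice when artifacts differ between the two maps.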
  • the similarity measurement data can be generated using a cross-correlation between two or more series of ultrasound data.
  • This method generally includes forming two or more series of ultrasound data from the ultrasound acquisitions, generating the similarity measurement based on the correlation of the two or more series of ultrasound data, and then generating the enhanced ultrasound signal data using the similarity measurement data.
  • any suitable way to generate two or more series of ultrasound data can be utilized.
  • the series of ultrasound data will each have the same number of frames, though in other implementations one or more of the series may have a different number of frames than the others.
  • the odd-numbered frames (e.g., frames 1, 3, 5, etc.) in the original data can be extracted as the first series of data, and the even-numbered frames (e.g., frames 2, 4, 6, etc.) as the second series of data.
  • every third frame can be selected (e.g., frames 1, 4, 7, etc., for the first series; frames 2, 5, 8, etc., for the second series).
  • frames may be randomly or otherwise arbitrarily assigned to the two or more series.
  • the received data corresponding to different transmitting angles can be split into two groups and coherently added to generate two series of postcompounded frames separately.
  • FIG. 4 illustrates an example of generating two series of postcompounded frames based on ultrasound compounding imaging. The way in which the received data corresponding to different transmitting angles are separated into two groups can be arbitrary.
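One way to sketch this angle-splitting scheme in NumPy; the odd/even angle split and the synthetic complex beamformed data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_angles, nx, ny, nt = 6, 16, 16, 10

# Complex beamformed data: one pre-compounded frame per steering angle
# per slow-time index.
per_angle = (rng.standard_normal((n_angles, nx, ny, nt))
             + 1j * rng.standard_normal((n_angles, nx, ny, nt)))

series1 = per_angle[0::2].sum(axis=0)   # coherent sum of angles 0, 2, 4
series2 = per_angle[1::2].sum(axis=0)   # coherent sum of angles 1, 3, 5
full = per_angle.sum(axis=0)            # conventional post-compounded frames
```

The two series together contain the same data as the full compound, so no information is discarded; only the grouping changes.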
  • the transmission of ultrasound waves by the row transducer array can be received and used for the formation of the first series of data, and the transmission of ultrasound waves by the column transducer array can be received and used for formation of the second series of data.
  • the degree of similarity can be estimated by calculating the crosscorrelation between these series of data, as illustrated in FIGS. 5A-5C.
  • S1(x, y, t) and S2(x, y, t) are the two series of ultrasound data of the target signal contaminated with noise, where x and y correspond to the spatial dimensions of the 2D image, and t is the frame number corresponding to the temporal dimension (as shown in FIG. 5A).
  • the degree of similarity can be calculated as a normalized cross-correlation between the temporal signals S1(xi, yi, t) and S2(xi, yi, t), with or without applying a specific lag of frames:
  • T is an integer indicating the number of lagging frames between the two groups of data.
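The equation introduced by the colon above appears to have been lost in extraction. A standard form of the lag-T normalized cross-correlation consistent with the surrounding description (the exact form in the patent may differ) is

```latex
r(x_i, y_i) = \frac{\left|\sum_{t} S_1(x_i, y_i, t)\, S_2^{*}(x_i, y_i, t+T)\right|}{\sqrt{\sum_{t} \left|S_1(x_i, y_i, t)\right|^{2} \,\sum_{t} \left|S_2(x_i, y_i, t+T)\right|^{2}}}
```

where * denotes the complex conjugate and T is the integer frame lag defined above.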
  • the degree of similarity can also be a combination (e.g., accumulation) of some of or all of the cross-correlations calculated with different lags.
  • this cross-correlation technique can be beneficially combined with the spatial correlation method described above, the temporal correlation method described above, or both, in any suitable way to further improve the measurement of signal similarity by leveraging both spatial and temporal information.
  • the similarity maps generated using a spatial correlation technique, a temporal correlation technique, and the cross-correlation technique can be combined by multiplying the maps together, thereby generating an enhanced similarity map with higher noise-suppression performance.
  • the cross-correlation method can be combined with the spatial correlation method described above, such that each pixel in the similarity map can be calculated from the cross-correlation of a small block of spatial-temporal data between the two series of ultrasound frames.
  • FIG. 5B shows an example of a similarity map obtained by calculating the normalized cross-correlation between two series of ultrasound data, each containing ten temporal frames. These two series of data correspond to the received data of different transmitting angles in ultrasound compounding imaging.
  • the similarity map can be preprocessed in any suitable way, such as by smoothing, before being applied to image enhancement. With this similarity information, the target signal can then be enhanced and the undesired background noise can be suppressed in any suitable way.
  • the similarity map can be converted to a weighting map applied to the original ultrasound image, where the true signal pixel is given a higher weight while the noise pixel is given a lower weight.
  • the weighting map can be any function of the similarity map.
  • the weighting map can be the same as the similarity map.
  • each pixel of the weighting map can be a square of the corresponding pixel in the similarity map.
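The weighting step above can be sketched as follows, assuming a real-valued image and a similarity map scaled to [0, 1]; `power=2` reproduces the squared-similarity example, while `power=1` uses the similarity map directly (the interface is illustrative, not from the patent):

```python
import numpy as np

def enhance_image(image, similarity_map, power=2):
    """Weight an ultrasound image by a function of the similarity map:
    true-signal pixels (high similarity) keep a high weight, while
    noise pixels (low similarity) are suppressed. Illustrative sketch."""
    # Clip negative/overshooting similarities so weights stay in [0, 1].
    weights = np.clip(similarity_map, 0.0, 1.0) ** power
    return image * weights
```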
  • FIG. 5C shows an example of an enhanced ultrasound microbubble image generated as a result of the original image weighted by the similarity map shown in FIG. 5B. Substantial improvement in CNR can be observed in FIG. 5C relative to the images in FIG. 5A, which facilitates better microbubble localization and/or microvessel imaging.
  • each ultrasound frame and/or volume is a formation of multiple ultrasound acquisitions (e.g., ultrafast compounding plane imaging, synthetic aperture imaging, compounding diverging wave imaging, compounding planar wave imaging using a 2D array or row-column array transducer, or the like)
  • the measurement of signal similarity can be performed by analyzing the data from multiple ultrasound acquisitions before the final frame/volume formation.
  • each final ultrasound frame will have a similarity map that can be used for conducting single-frame signal enhancement.
  • the final ultrasound frame (also referred to as post-compounded ultrasound frame) is formed by data acquired from multiple ultrasound transmissions at varying steering angles, as illustrated in FIG. 6A.
  • the received data can be used to beamform or generate a pre-compounded frame.
  • the pre-compounded frames can thus be treated as a packet of multi-frame data, with the packet size equal to the number of transmissions and/or acquisitions.
  • the signal similarity measurements described above can be conducted by analyzing this packet of pre-compounded frames.
  • a packet of pre-compounded frames of target signal contaminated with noise can be used to estimate the signal similarity based on one of the methods described above, or based on a combination of those methods as also described above.
  • a lag-one normalized autocorrelation of this packet of multi-frame pre-compounded data can be used to calculate the similarity.
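A sketch of the lag-one normalized autocorrelation across the packet dimension, under the assumption that the packet is a real-valued array with the pre-compounded transmissions stacked along the last axis (illustrative, not the patent's implementation):

```python
import numpy as np

def lag_one_autocorr(packet):
    """Per-pixel lag-one normalized autocorrelation along the packet axis.
    `packet` has shape (nx, ny, n_transmits): the pre-compounded frames
    formed from one compounding event. Coherent target signal is similar
    from one transmission to the next (high autocorrelation), while noise
    decorrelates (low autocorrelation). Illustrative sketch."""
    a = packet - packet.mean(axis=-1, keepdims=True)
    num = (a[..., :-1] * a[..., 1:]).sum(axis=-1)
    den = np.sqrt((a[..., :-1] ** 2).sum(axis=-1) *
                  (a[..., 1:] ** 2).sum(axis=-1)) + 1e-12
    return num / den
```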
  • FIG. 6B shows an example of a similarity map using the normalized autocorrelation calculation based on the pre-compounded data.
  • these multi-frame pre-compounded data can be separated into two series of data and a normalized cross-correlation can be used to estimate the similarity map.
  • the similarity map can be preprocessed in any suitable way such as smoothing before applying to image enhancement.
  • the pre-compounded or final post-compounded ultrasound frame can then be enhanced according to the similarity measurement, such that the target signal corresponding to a higher similarity is enhanced while the background noise corresponding to a lower similarity is suppressed in any suitable way.
  • the similarity map can be converted to a weighting map applied to the ultrasound image to be enhanced, where the true signal pixel is given a higher weight while the noise pixel is given a lower weight.
  • the weighting map can be any function of the similarity map. In one example, the weighting map can be the same as the similarity map.
  • each pixel of the weighting map can be a square of the corresponding pixel in the similarity map.
  • An example enhanced ultrasound microbubble image is shown in FIG. 6D as a result of the post-compounded image weighted by the similarity map shown in FIG. 6B, showing substantial improvement in signal contrast and noise suppression.
  • the proposed methods can be applied to three-dimensional ultrasound imaging based on a row-column array (“RCA”) transducer.
  • the acoustic energy density transmitted by an RCA transducer is usually low, which results in a low SNR and sensitivity in blood flow measurements.
  • RCA transducers suffer from high-level sidelobes and low CNR, particularly when transmitting unfocused beams. These drawbacks of RCA transducers can be overcome using the similarity-based signal enhancement methods described in the present disclosure.
  • orthogonal plane wave imaging (transmitting with row elements and receiving with column elements, or vice versa) was used to acquire microbubble data at varying steering angles in a water tank, as illustrated in FIG. 7A.
  • the received data for each ultrasound transmission was then beamformed to obtain a pre-compounded three-dimensional volume image.
  • a post-compounded frame of microbubble data can be obtained (e.g., in the illustrated example, a three-dimensional region with 3.2-mm width in the elevational direction is projected to a two-dimensional image and shown in FIG. 7B).
  • the pre-compounded frames can be used to estimate the signal similarity based on one of the methods described above, or based on a combination of the methods described above.
  • a normalized cross-correlation map can be computed between two series of pre-compounded frames, as shown in FIG. 7C.
  • a cross-similarity map c (as shown in FIG. 7D) can be computed as:
  • μ1 and μ2 indicate the mean values of the two series of pre-compounded frames at each spatial pixel, and σ1 and σ2 indicate the corresponding standard deviations.
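The cross-similarity equation itself is not reproduced in this excerpt; a Pearson-style normalization using the per-pixel means (μ1, μ2) and standard deviations (σ1, σ2) named above is one plausible, assumed form:

```python
import numpy as np

def cross_similarity(s1, s2):
    """Per-pixel cross-similarity between two series of pre-compounded
    frames of shape (nx, ny, n_frames), normalized by the per-pixel means
    and standard deviations. The exact equation from the patent is not
    reproduced here; this form is an illustrative assumption."""
    mu1 = s1.mean(axis=-1, keepdims=True)
    mu2 = s2.mean(axis=-1, keepdims=True)
    sigma1 = s1.std(axis=-1)
    sigma2 = s2.std(axis=-1)
    # Covariance of the two series at each pixel, normalized to [-1, 1].
    cov = ((s1 - mu1) * (s2 - mu2)).mean(axis=-1)
    return cov / (sigma1 * sigma2 + 1e-12)
```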
  • the computed similarity map can be preprocessed in any suitable way and then applied to post-compounded frame to enhance the signal contrast, and suppress the sidelobes and noise.
  • each pixel of the weighting map is the square of the corresponding pixel in the similarity map, and the enhanced ultrasound frames of microbubble are shown in FIGS. 7E and 7F.
  • the cross-similarity map calculated by the above equation shows a better contrast enhancement than the normalized cross-correlation map.
  • the similarity map itself can be used as an image with enhanced microbubble signal or microvessel features.
  • signal similarity is thus measured based on the pre-compounded data (or data of the multiple ultrasound acquisitions before the formation of the final frame), and can be applied to the corresponding post-compounded frame (or formation of final frame) for signal enhancement. Therefore, this method can be applied to a single frame of ultrasound data, thereby providing a single-frame enhancement method having improved temporal resolution.
  • the similarity measurements described in the present disclosure for use with enhancing ultrasound blood flow imaging techniques can be further improved by combining them with a temporal-domain similarity evaluation.
  • the dataset S RC (acquired by transmitting with row elements and receiving with column elements) and dataset S CR (acquired by transmitting with column elements and receiving with row elements) can be separated into multiple sub-datasets along the angular dimension, respectively.
  • the way in which the original datasets are separated into multiple subsets can be arbitrary.
  • the original dataset S RC can be split into K subsets, as follows:
  • K is the number of separated subsets, which can be an arbitrary number
  • m is the angle number corresponding to the angular dimension
  • t is the frame number corresponding to the temporal dimension
  • Nm is the total number of RC transmissions.
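Since the way of separating the original datasets is arbitrary, one simple scheme consistent with the description is to interleave every K-th transmission along the angular dimension (the function name and scheme are illustrative):

```python
import numpy as np

def split_angular(s_rc, K):
    """Split a dataset of shape (nx, ny, n_angles) into K subsets along
    the angular dimension by taking every K-th transmission. One of many
    arbitrary splitting schemes; illustrative sketch."""
    return [s_rc[..., k::K] for k in range(K)]
```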
  • a final similarity map can be generated by combining (e.g., by averaging, cross-correlating, or other means of combination) all or part of the obtained series of cross-similarity (CS) maps (e.g., the intermediate temporal cross-similarity maps).
  • a normalized cross-correlation can be applied between all or part of the calculated CS maps and averaged to obtain the final spatiotemporal similarity map as:
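The exact combination equation is not reproduced in this excerpt; one assumed reading of "cross-correlated between CS maps, then averaged" (per-pixel products of consecutive CS maps, averaged over the pairs) can be sketched as:

```python
import numpy as np

def spatiotemporal_similarity(cs_maps):
    """Combine a temporal series of cross-similarity (CS) maps into a
    final spatiotemporal similarity map. Here each consecutive pair of
    CS maps is multiplied per pixel and the products are averaged; this
    is an illustrative assumption, not the patent's exact formula."""
    stack = np.stack(cs_maps, axis=-1)          # (nx, ny, n_maps)
    pair_products = stack[..., :-1] * stack[..., 1:]
    return pair_products.mean(axis=-1)
```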
  • FIGS. 8A and 8B show an example of the 3D microvascular blood flow image of a submandibular gland before (FIG. 8A) and after (FIG. 8B) enhancement by the similarity map obtained with this method, demonstrating the feasibility of the methods described in the present disclosure.
  • Ultrasound microbubble data are used as an example, for the sake of clarity, to explain the techniques described above with respect to FIGS. 2-7.
  • the systems and methods described in the present disclosure can be beneficially applied to ultrasound signals from other contrast agents, or applied to ultrasound signals without using contrast agents.
  • the ultrasound signal of microflow in a small vessel (signal of red blood cells, as an example illustrated in FIG. 9B) obtained by tissue clutter filtering on the original ultrasound data (FIG. 9A) is displayed frame-by-frame at high temporal resolution, but typically with low SNR and/or CNR.
  • ultrasound power Doppler imaging or ultrasound microvessel imaging (“UMI”) typically generates one blood flow image by accumulating or averaging a packet of frames of blood flow signals (as example illustrated in FIG.
  • FIGS. 9C and 9D show an example of an ultrasound b-flow microvessel image and a power Doppler microvessel image derived from data captured from a human liver using ultrasound compounding imaging.
  • the systems and methods described in the present disclosure can be applied for similarity measurement in this case, and then used to perform image enhancement of the microvessel frame for either b-flow imaging or power Doppler/UMI imaging.
  • FIG. 10A shows an example of a b-flow microvessel image enhanced using the cross-correlation methods described above, which depicts the effectiveness of the technique in non-contrast image enhancement.
  • the systems and methods described in the present disclosure can advantageously be used to enhance, or otherwise supplement, other Doppler ultrasound techniques and/or non-contrast UMI techniques.
  • the systems and methods can be used in combination with techniques for reducing noise-induced bias in ultrasound blood flow imaging, such as those described in co-pending U.S. Patent Application Serial No. 17/260,793.
  • UMI produces a microvessel image by accumulating a packet of ultrasound data with a certain number of frames (as example shown in FIG. 9D). This microvessel image can be further enhanced by the similarity information obtained using techniques in this disclosure to gain improved CNR.
  • a similarity analysis can be performed based on this full packet of ultrasound data, or parts of this packet of data, to generate a similarity measurement using one of, or a combination of the above-described methods.
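A sketch of forming a power Doppler/UMI image by accumulating the power of a packet of clutter-filtered blood-flow frames, optionally weighted by a squared similarity map for improved CNR (the names and the squared weighting are illustrative assumptions):

```python
import numpy as np

def power_doppler_umi(blood_flow_packet, similarity_map=None):
    """Accumulate a packet of clutter-filtered blood-flow frames of shape
    (nx, ny, n_frames) into a power Doppler / UMI image, optionally
    enhanced by a similarity-derived weighting map. Illustrative sketch."""
    # Mean signal power per pixel over the packet of frames.
    power = (np.abs(blood_flow_packet) ** 2).mean(axis=-1)
    if similarity_map is not None:
        # Squared-similarity weighting, as in the examples above.
        power = power * np.clip(similarity_map, 0.0, 1.0) ** 2
    return power
```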
  • the microvessel image (FIG. 9D for instance) can then be enhanced with improved contrast in any suitable way.
  • the similarity map can be converted to a weighting map applied to the microvessel image (FIG. 9D for instance), where the pixel of blood flow is given a higher weight while the pixel of noise is given a lower weight.
  • the weighting map can be any function of the similarity map.
  • the weighting map can be the same as the similarity map.
  • each pixel of the weighting map can be a square of the corresponding pixel in the similarity map.
  • The enhanced microvessel image generated by the cross-correlation method described above is shown in FIG. 10B, revealing the feasibility and robustness of the techniques as applied to ultrasound microvessel imaging (ultrasound Doppler imaging) without using an ultrasound contrast agent.
  • B-mode ultrasound imaging displays the intensity (or brightness) of received ultrasound echoes backscattered from the targeted tissue.
  • the contrast of a B-mode image can be deteriorated by ultrasound attenuation, background noise, ultrasound reverberation artifacts, phase aberration artifacts, off-axis scattering artifacts, and so on.
  • similarity measurement data generated by performing a similarity analysis can be used to enhance the CNR of these B-mode images in a similar manner as described above with respect to blood flow imaging.
  • Pixels of tissue signal can correspond to a higher level of spatial-temporal similarity, whereas pixels of noise and/or artifacts more likely relate to a lower level of spatial-temporal similarity. Therefore, the targeted tissue signal can be enhanced according to the measurement of similarity.
  • the B-mode ultrasound image of the final ultrasound frame may be formed by data acquired from multiple ultrasound transmissions at varying steering angles.
  • a similarity analysis can be performed among the pre-compounded frames to generate a spatial-temporal similarity map (as shown in FIGS. 6A-6B).
  • FIG. 11A shows a post-compounded B-mode ultrasound image captured from a human carotid artery as an example.
  • FIG. 11B is the corresponding similarity map estimated using a normalized autocorrelation calculation among the pre-compounded data. With this similarity measurement, the B-mode image can then be enhanced with improved contrast.
  • the similarity map can be converted to a weighting map applied to the B-mode image, where pixels of tissue (e.g., the arterial wall shown in FIGS. 11A and 11C) are given a higher weight than pixels of noise and/or artifact (e.g., the vessel lumen area shown in FIGS. 11A and 11C), which are given a lower weight.
  • the weighting map can be any function of the similarity map.
  • the weighting map can be the same as the similarity map, or each pixel of the weighting map can be a square of the corresponding pixel in the similarity map.
  • FIG. 11C shows the example carotid artery B-mode image after enhancement with the similarity map, revealing a substantial contrast enhancement between the vessel lumen and surrounding tissue as compared with FIG. 11A, which demonstrates the advantage of the disclosed systems and methods in improving CNR in conventional ultrasound imaging.
  • FIG. 11 illustrates an example of an ultrasound system 1100 that can implement the methods described in the present disclosure.
  • the ultrasound system 1100 includes a transducer array 1102 that includes a plurality of separately driven transducer elements 1104.
  • the transducer array 1102 can include any suitable ultrasound transducer array, including linear arrays, curved arrays, phased arrays, and so on.
  • the transducer array 1102 can include a 1D transducer, a 1.5D transducer, a 1.75D transducer, a 2D transducer, a 3D transducer, and so on.
  • When energized by a transmitter 1106, a given transducer element 1104 produces a burst of ultrasonic energy. The ultrasonic energy reflected back to the transducer array 1102 (e.g., an echo) from the object or subject under study is converted to an electrical signal (e.g., an echo signal) by each transducer element 1104 and can be applied separately to a receiver 1108 through a set of switches 1110.
  • the transmitter 1106, receiver 1108, and switches 1110 are operated under the control of a controller 1112, which may include one or more processors. As one example, the controller 1112 can include a computer system.
  • the transmitter 1106 can be programmed to transmit unfocused or focused ultrasound waves. In some configurations, the transmitter 1106 can also be programmed to transmit diverged waves, spherical waves, cylindrical waves, plane waves, or combinations thereof. Furthermore, the transmitter 1106 can be programmed to transmit spatially or temporally encoded pulses.
  • the receiver 1108 can be programmed to implement a suitable detection sequence for the imaging task at hand.
  • the detection sequence can include one or more of line-by-line scanning, compounding plane wave imaging, synthetic aperture imaging, and compounding diverging beam imaging.
  • the transmitter 1106 and the receiver 1108 can be programmed to implement a high frame rate. For instance, a frame rate associated with an acquisition pulse repetition frequency (“PRF”) of at least 100 Hz can be implemented.
  • the ultrasound system 1100 can sample and store at least one hundred ensembles of echo signals in the temporal direction.
  • the controller 1112 can be programmed to design or otherwise select an imaging sequence.
  • the controller 1112 receives user inputs defining various factors used in the design of the imaging sequence.
  • a scan can be performed by setting the switches 1110 to their transmit position, thereby directing the transmitter 1106 to be turned on momentarily to energize transducer elements 1104 during a single transmission event according to the designed imaging sequence.
  • the switches 1110 can then be set to their receive position and the subsequent echo signals produced by the transducer elements 1104 in response to one or more detected echoes are measured and applied to the receiver 1108.
  • the separate echo signals from the transducer elements 1104 can be combined in the receiver 1108 to produce a single echo signal.
  • the echo signals are communicated to a processing unit 1114, which may be implemented by a hardware processor and memory, to process echo signals or images generated from echo signals.
  • the processing unit 1114 can generate enhanced ultrasound data and/or images using the methods described in the present disclosure. Images produced from the echo signals by the processing unit 1114 can be displayed on a display system 1116.
  • the computing device 1350 can communicate information about data received from the data source 1302 to a server 1352 over a communication network 1354, which can execute at least a portion of the similarity-based ultrasound image enhancement system 1304.
  • the server 1352 can return information to the computing device 1350 (and/or any other suitable computing device) indicative of an output of the similarity-based ultrasound image enhancement system 1304.
  • computing device 1350 and/or server 1352 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on.
  • the computing device 1350 and/or server 1352 can also reconstruct images from the data.
  • data source 1302 can be any suitable source of data (e.g., measurement data, images reconstructed from measurement data, processed image data), such as an ultrasound system, another computing device (e.g., a server storing measurement data, images reconstructed from measurement data, processed image data), and so on.
  • data source 1302 can be local to computing device 1350.
  • data source 1302 can be incorporated with computing device 1350 (e.g., computing device 1350 can be configured as part of a device for measuring, recording, estimating, acquiring, or otherwise collecting or storing data).
  • data source 1302 can be connected to computing device 1350 by a cable, a direct wireless link, and so on.
  • data source 1302 can be located locally and/or remotely from computing device 1350, and can communicate data to computing device 1350 (and/or server 1352) via a communication network (e.g., communication network 1354).
  • communication network 1354 can be any suitable communication network or combination of communication networks.
  • communication network 1354 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), other types of wireless network, a wired network, and so on.
  • computing device 1350 can include a processor 1402, a display 1404, one or more inputs 1406, one or more communication systems 1408, and/or memory 1410.
  • processor 1402 can be any suitable hardware processor or combination of processors, such as a central processing unit (“CPU”), a graphics processing unit (“GPU”), and so on.
  • display 1404 can include any suitable display devices, such as a liquid crystal display (“LCD”) screen, a light-emitting diode (“LED”) display, an organic LED (“OLED”) display, an electrophoretic display (e.g., an “e-ink” display), a computer monitor, a touchscreen, a television, and so on.
  • inputs 1406 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 1408 can include any suitable hardware, firmware, and/or software for communicating information over communication network 1354 and/or any other suitable communication networks.
  • communications systems 1408 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 1408 can include hardware, firmware, and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 1410 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1402 to present content using displays 1404, to communicate with server 1352 via communications system(s) 1408, and so on.
  • Memory 1410 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 1410 can include random-access memory (“RAM”), read-only memory (“ROM”), electrically programmable ROM (“EPROM”), electrically erasable ROM (“EEPROM”), other forms of volatile memory, other forms of non-volatile memory, one or more forms of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 1410 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 1350.
  • processor 1402 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 1352, transmit information to server 1352, and so on.
  • the processor 1402 and the memory 1410 can be configured to perform the methods described herein (e.g., the method of FIG. 1).
  • server 1352 can include a processor 1412, a display 1414, one or more inputs 1416, one or more communications systems 1418, and/or memory 1420.
  • processor 1412 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • display 1414 can include any suitable display devices, such as an LCD screen, LED display, OLED display, electrophoretic display, a computer monitor, a touchscreen, a television, and so on.
  • inputs 1416 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 1418 can include any suitable hardware, firmware, and/or software for communicating information over communication network 1354 and/or any other suitable communication networks.
  • communications systems 1418 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 1418 can include hardware, firmware, and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 1420 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1412 to present content using display 1414, to communicate with one or more computing devices 1350, and so on.
  • Memory 1420 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 1420 can include RAM, ROM, EPROM, EEPROM, other types of volatile memory, other types of non-volatile memory, one or more types of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 1420 can have encoded thereon a server program for controlling operation of server 1352.
  • processor 1412 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 1350, receive information and/or content from one or more computing devices 1350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
  • the server 1352 is configured to perform the methods described in the present disclosure.
  • the processor 1412 and memory 1420 can be configured to perform the methods described herein (e.g., the method of FIG. 1).
  • data source 1302 can include a processor 1422, one or more data acquisition systems 1424, one or more communications systems 1426, and/or memory 1428.
  • processor 1422 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • the one or more data acquisition systems 1424 are generally configured to acquire data, images, or both, and can include an ultrasound system. Additionally or alternatively, in some embodiments, the one or more data acquisition systems 1424 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of an ultrasound system. In some embodiments, one or more portions of the data acquisition system(s) 1424 can be removable and/or replaceable.
  • data source 1302 can include any suitable inputs and/or outputs.
  • data source 1302 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on.
  • data source 1302 can include any suitable display devices, such as an LCD screen, an LED display, an OLED display, an electrophoretic display, a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
  • communications systems 1426 can include any suitable hardware, firmware, and/or software for communicating information to computing device 1350 (and, in some embodiments, over communication network 1354 and/or any other suitable communication networks).
  • communications systems 1426 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 1426 can include hardware, firmware, and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 1428 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 1422 to control the one or more data acquisition systems 1424, and/or receive data from the one or more data acquisition systems 1424; to generate images from data; present content (e.g., data, images, a user interface) using a display; communicate with one or more computing devices 1350; and so on.
  • Memory 1428 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 1428 can include RAM, ROM, EPROM, EEPROM, other forms of volatile memory, other forms of non-volatile memory, and so on.
  • memory 1428 can have encoded thereon, or otherwise stored therein, a program for controlling operation of data source 1302.
  • processor 1422 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 1350, receive information and/or content from one or more computing devices 1350, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
  • any suitable computer-readable media can be used for storing instructions for performing the functions and/or processes described herein.
  • computer-readable media can be transitory or non-transitory.
  • non-transitory computer-readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., RAM, flash memory, EPROM, EEPROM), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media.
  • transitory computer-readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
  • a component may be, but is not limited to being, a processor device, a process being executed (or executable) by a processor device, an object, an executable, a thread of execution, a computer program, or a computer.
  • an application running on a computer and the computer can be a component.
  • One or more components may reside within a process or thread of execution, may be localized on one computer, may be distributed between two or more computers or other processor devices, or may be included within another component (or system, module, and so on).
  • devices or systems disclosed herein can be utilized or installed using methods embodying aspects of the disclosure.
  • description herein of particular features, capabilities, or intended purposes of a device or system is generally intended to inherently include disclosure of a method of using such features for the intended purposes, a method of implementing such capabilities, and a method of installing disclosed (or otherwise known) components to support these purposes or capabilities.
  • discussion herein of any method of manufacturing or using a particular device or system, including installing the device or system is intended to inherently include disclosure, as embodiments of the disclosure, of the utilized features and implemented capabilities of such device or system.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Acoustics & Sound (AREA)
  • Vascular Medicine (AREA)
  • Hematology (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)
EP24723262.2A 2023-03-30 2024-04-01 Enhanced ultrasound blood flow imaging using similarity measurements to increase contrast-to-noise ratio Pending EP4687684A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363493243P 2023-03-30 2023-03-30
PCT/US2024/022522 WO2024207004A1 (en) 2023-03-30 2024-04-01 Enhanced ultrasound blood flow imaging using similarity measurements to increase contrast-to-noise ratio

Publications (1)

Publication Number Publication Date
EP4687684A1 (de) 2026-02-11

Family

ID=90924865

Family Applications (1)

Application Number Title Priority Date Filing Date
EP24723262.2A Pending EP4687684A1 (de) 2023-03-30 2024-04-01 Enhanced ultrasound blood flow imaging using similarity measurements to increase contrast-to-noise ratio

Country Status (3)

Country Link
EP (1) EP4687684A1 (de)
CN (1) CN121127182A (de)
WO (1) WO2024207004A1 (de)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11096671B2 (en) * 2015-09-10 2021-08-24 Siemens Medical Solutions Usa, Inc. Sparkle artifact detection in ultrasound color flow
WO2020018901A1 (en) * 2018-07-19 2020-01-23 Mayo Foundation For Medical Education And Research Systems and methods for removing noise-induced bias in ultrasound blood flow imaging

Also Published As

Publication number Publication date
CN121127182A (zh) 2025-12-12
WO2024207004A1 (en) 2024-10-03

Similar Documents

Publication Publication Date Title
US12437363B2 (en) Methods for high spatial and temporal resolution ultrasound imaging of microvessels
US11589840B2 (en) Methods for super-resolution ultrasound imaging of microvessels
US12539105B2 (en) Super-resolution microvessel imaging using separated subsets of ultrasound data
US12478352B2 (en) Systems and methods for removing noise-induced bias in ultrasound blood flow imaging
US8343054B1 (en) Methods and apparatus for ultrasound imaging
US20190369220A1 (en) Methods and systems for filtering ultrasound image clutter
US12223568B2 (en) Systems and methods for generating and estimating unknown and unacquired ultrasound data
WO2020081915A1 (en) Systems and methods for kalman filter-based microvessel inpainting for super-resolution imaging
FR3003154A1 (fr) Fat fraction estimation using ultrasound based on shear wave propagation
EP3771927A1 (de) Ultrasound system for detecting a fluid flow in a medium
US20230404540A1 (en) Methods for motion tracking and correction of ultrasound ensemble
EP4687684A1 (de) Enhanced ultrasound blood flow imaging using similarity measurements to increase contrast-to-noise ratio
US12372647B2 (en) Systems and methods for plane wave compounding in ultrasound imaging
Pustovalov Inverse problems for blood flow restoration and imaging in ultrafast ultrasound
HK40109430A (en) Methods for super-resolution ultrasound imaging of microvessels
JP2024084515A (ja) Ultrasound diagnostic apparatus

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20251023

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR