WO2021175629A1 - Contextual multiplanar reconstruction of three-dimensional ultrasound imaging data and associated devices, systems, and methods - Google Patents


Info

Publication number
WO2021175629A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
adjacent
target image
dimensional
ultrasound
Prior art date
Application number
PCT/EP2021/054251
Other languages
French (fr)
Inventor
David Nigel Roundhill
Tobias Klinder
Alexander SCHMIDT-RICHBERG
Matthias LENGA
Eliza Teodora Orasanu
Cristian Lorenz
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Priority to JP2022552408A priority Critical patent/JP2023517512A/en
Priority to CN202180019152.4A priority patent/CN115243621A/en
Priority to EP21707653.8A priority patent/EP4114272A1/en
Priority to US17/908,289 priority patent/US20230054610A1/en
Publication of WO2021175629A1 publication Critical patent/WO2021175629A1/en


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/48Diagnostic techniques
    • A61B8/483Diagnostic techniques involving the acquisition of a 3D volume of data
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/523Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for generating planar views from image data in a user selectable plane not corresponding to the acquisition plane
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/89Sonar systems specially adapted for specific applications for mapping or imaging
    • G01S15/8906Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques
    • G01S15/8993Three dimensional imaging systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/13Tomography
    • A61B8/14Echo-tomography
    • A61B8/145Echo-tomography characterised by scanning multiple planes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52053Display arrangements
    • G01S7/52057Cathode ray tube displays
    • G01S7/52073Production of cursor lines, markers or indicia by electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical

Definitions

  • the present disclosure relates generally to the acquisition and processing of ultrasound images and, in particular, to systems and methods for reconstructing two-dimensional images from three-dimensional ultrasound image data.
  • Ultrasound imaging is frequently used to obtain images of internal anatomical structures of a patient.
  • Ultrasound systems typically comprise an ultrasound transducer probe that includes a transducer array coupled to a probe housing.
  • the transducer array is activated to vibrate at ultrasonic frequencies to transmit ultrasonic energy into the patient’s anatomy, and then receive ultrasonic echoes reflected or backscattered by the patient’s anatomy to create an image.
  • Such transducer arrays may include various layers, including some with piezoelectric materials, which vibrate in response to an applied voltage to produce the desired pressure waves. These transducers may be used to successively transmit and receive several ultrasonic pressure waves through the various tissues of the body. The various ultrasonic responses may be further processed by an ultrasonic imaging system to display the various structures and tissues of the body.
  • Physicians and sonographers often desire to obtain certain views of the patient’s body such that the imaging plane of the ultrasound transducer is aligned to obtain an image of a particular combination and orientation of anatomical structures.
  • a sonographer obtains a variety of images or views of the fetal anatomy to perform measurements and assessments of the fetus’ development. Examples of these views include trans-ventricular, trans-cerebellar, and trans-thalamic image planes.
  • these target views are achieved manually by an experienced sonographer positioning and orienting the ultrasound transducer while watching a real-time or near real-time image stream of the field of view of the ultrasound transducer.
  • the sonographer may freeze or save the image frame to memory, and/or continue to move the probe around the target view to ensure that the desired view is actually achieved.
  • the temporal and spatial information gained in conventional two-dimensional ultrasound imaging procedures can be useful in assessing an image, as it provides important context information in addition to a dedicated two-dimensional image to be used in the biometry assessment, for example.
  • the sonographer has a spatial awareness of the location of the plane they are searching for based on their knowledge of the images of anatomy that are near or adjacent to the plane they want to capture. Intrinsically, the confidence that the target plane is correctly detected is based on the temporal context in the two-dimensional image stream.
  • some ultrasound imaging systems allow for a target view to be obtained automatically from a three-dimensional ultrasound data set.
  • Automated image reconstruction techniques can be beneficial because they depend less on the individual skills of the sonographer.
  • an ultrasound transducer array may be used to sweep out a three-dimensional volume and obtain a three-dimensional ultrasound data set representative of the volume while the sonographer holds the ultrasound probe body at a fixed position and orientation.
  • a multiplanar reconstruction (MPR) technique can be used to generate a two-dimensional image associated with a desired view.
  • MPR imaging techniques beneficially allow for less-experienced sonographers to achieve a target view
  • one drawback of MPR is that an image may not be correctly reconstructed by the system, and the user does not have the spatial/temporal context to confirm whether or not the reconstructed image is correct or optimal.
  • conventional two-dimensional ultrasound imaging provides the sonographer with real-time feedback and spatial context as the user can make manual adjustments to the probe to evaluate the imaged volume around the target view.
  • an ultrasound imaging system includes a processor circuit in communication with an ultrasound transducer configured to obtain a three-dimensional ultrasound data set.
  • the processor circuit is configured to reconstruct, from the ultrasound data set, a target image or image slice corresponding to a target view or image plane, such as an apical view of the heart, trans-cerebellar, trans-thalamic, etc.
  • the processor circuit is configured to reconstruct, from the same three-dimensional ultrasound data set, one or more adjacent images corresponding to planes that are adjacent to the target image plane.
  • the adjacent images are reconstructed such that the adjacent image and the target image correspond to image planes that are on a simulated motion path that also corresponds to the target image.
  • the simulated motion path may be representative of a linear translation, a sweeping or fanning motion, a tilt, rocking motion, and/or a rotation of the ultrasound probe, with the target image plane extending from a point along the simulated motion path.
  • the reconstructed adjacent images can be output to a display such that a user can view the adjacent images as part of a sequence with the target image to obtain spatial, contextual information about the target image.
  • the user can scan through the target image and one or more adjacent images on the display as if the ultrasound transducer were being scanned along the simulated motion path.
  • the user may determine that the target image reconstructed from the image data is correctly-aligned with the target image plane, or determine that adjustments could be made to reconstruct the target image to be correctly aligned with the target image plane.
  • an ultrasound imaging apparatus includes a processor circuit configured to: receive, from an ultrasound probe communicatively coupled to the processor circuit, three-dimensional ultrasound data of an anatomy; generate, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy; generate, from the three-dimensional ultrasound data, a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the plurality of adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data; output, to a display in communication with the processor circuit, the target image; receive a user input representative of a direction of motion along the simulated motion path; and output, to the display, an adjacent image of the plurality of adjacent images corresponding to the direction of motion.
  • the apparatus further includes the ultrasound probe.
  • the processor circuit is configured to: interpolate between a position and orientation of the target image and a position and orientation of the adjacent image to generate an interpolated image; and output the interpolated image to the display.
  • the processor circuit is configured to: determine a direction of uncertainty relative to the target image plane; and determine the simulated motion path based on the determined direction of uncertainty.
  • the processor circuit is configured to apply a covariance matrix to the three-dimensional ultrasound data to determine the direction of uncertainty.
  • the processor circuit is configured to: identify, from the three-dimensional ultrasound data, an image plane of interest different from the target image and the adjacent image; and determine the simulated motion path based on the target image, the adjacent image, and the image of interest.
  • the processor circuit is configured to output the image of interest in response to receiving the user input.
  • the plurality of adjacent images comprises a plurality of parallel adjacent images associated with a plurality of parallel adjacent image planes.
  • the processor circuit is configured to generate the target image to exclude an anatomical feature.
  • the processor circuit is configured to generate the adjacent image to include the anatomical feature.
  • the processor circuit is further configured to output, to the display, a graphical representation of an adjacent image plane associated with the adjacent image, wherein the graphical representation includes: a diagrammatic view of a body portion associated with the target image; and an indicator of the adjacent image plane overlaid on the diagrammatic view of the body portion.
  • a method for reconstructing ultrasound images includes: receiving, three-dimensional ultrasound data of an anatomy obtained by an ultrasound probe; generating, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy; generating, from the three-dimensional ultrasound data, a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the plurality of adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data; outputting, to a display, the target image; receiving a user input representative of a direction of motion along the simulated motion path; and outputting, to a display, an adjacent image of the plurality of adjacent images corresponding to the direction of motion.
  • generating the plurality of adjacent images comprises interpolating between a position and orientation of the target image and a position and orientation of the adjacent image to generate an interpolated image.
  • the method further comprises outputting the interpolated image to the display.
  • generating the plurality of adjacent images comprises: determining a direction of uncertainty relative to the target image plane; and determining the simulated motion path based on the determined direction of uncertainty.
  • determining the direction of uncertainty comprises applying a covariance matrix to the three-dimensional ultrasound data.
  • the method further comprises identifying, from the three-dimensional ultrasound data, an image of interest different from the target image and the adjacent image; and determining the simulated motion path based on the target image, the adjacent image, and the image of interest.
  • the method further comprises outputting the image of interest in response to receiving the user input.
  • generating the plurality of adjacent images comprises generating a plurality of parallel adjacent images associated with a plurality of parallel adjacent image planes.
  • generating the target image comprises generating the target image to exclude an anatomical feature.
  • generating the adjacent image comprises generating the adjacent image to include the anatomical feature.
  • the method further comprises outputting, to the display, a graphical representation of an adjacent image plane associated with the adjacent image.
  • the graphical representation includes: a diagrammatic view of a body portion associated with the target image; and an indicator of the adjacent image plane overlaid on the diagrammatic view of the body portion.
  • Fig. 1 is a schematic diagram of an ultrasound imaging system, according to embodiments of the present disclosure.
  • FIG. 2 is a schematic diagram of a processor circuit, according to embodiments of the present disclosure.
  • Fig. 3 is a diagrammatic view of image slices reconstructed from a three-dimensional ultrasound data set of a volume, according to aspects of the present disclosure.
  • Fig. 4 is a flow diagram illustrating a method for generating and displaying contextual multiplanar image reconstructions from a three-dimensional ultrasound dataset, according to aspects of the present disclosure.
  • Fig. 5 is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated motion path, according to aspects of the present disclosure.
  • Fig. 6A is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated linear motion path, according to aspects of the present disclosure.
  • Fig. 6B is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated curved motion path, according to aspects of the present disclosure.
  • Fig. 6C is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated curved motion path, according to aspects of the present disclosure.
  • Fig. 7 is a flow diagram illustrating a method for generating interpolated image slices for a contextual multiplanar image reconstruction sequence, according to aspects of the present disclosure.
  • Fig. 8 is a flow diagram illustrating a method for reconstructing image slices based on a direction of uncertainty for a contextual multiplanar image reconstruction sequence, according to aspects of the present disclosure.
  • Fig. 9 is a screen of a graphical user interface that includes a reconstructed image frame corresponding to a target view, according to aspects of the present disclosure.
  • an ultrasound system 100 according to embodiments of the present disclosure is shown in block diagram form.
  • An ultrasound imaging apparatus or ultrasound probe 10 has a transducer array 12 comprising a plurality of ultrasound transducer elements or acoustic elements.
  • the array 12 may include any number of acoustic elements.
  • the array 12 can include between 1 acoustic element and 100000 acoustic elements, including values such as 2 acoustic elements, 4 acoustic elements, 36 acoustic elements, 64 acoustic elements, 128 acoustic elements, 300 acoustic elements, 812 acoustic elements, 3000 acoustic elements, 9000 acoustic elements, 30,000 acoustic elements, 65,000 acoustic elements, and/or other values both larger and smaller.
  • the acoustic elements of the array 12 may be arranged in any suitable configuration, such as a linear array, a planar array, a curved array, a curvilinear array, a circumferential array, an annular array, a phased array, a matrix array, a one-dimensional (1D) array, a 1.X-dimensional array (e.g., a 1.5D array), or a two-dimensional (2D) array.
  • the array of acoustic elements (e.g., one or more rows, one or more columns, and/or one or more orientations) can be uniformly or independently controlled and activated.
  • the array 12 can be configured to obtain one-dimensional, two-dimensional, and/or three-dimensional images of patient anatomy.
  • aspects of the present disclosure can be implemented in any suitable ultrasound imaging probe or system, including external ultrasound probes and intraluminal ultrasound probes.
  • aspects of the present disclosure can be implemented in ultrasound imaging systems using a mechanically-scanned external ultrasound imaging probe, an intracardiac (ICE) echocardiography catheter and/or a transesophageal echocardiography (TEE) probe, a rotational intravascular ultrasound (IVUS) imaging catheter, a phased-array IVUS imaging catheter, a transthoracic echocardiography (TTE) imaging device, or any other suitable type of ultrasound imaging device.
  • the acoustic elements of the array 12 may comprise one or more piezoelectric/piezoresistive elements, lead zirconate titanate (PZT), piezoelectric micromachined ultrasound transducer (PMUT) elements, capacitive micromachined ultrasound transducer (CMUT) elements, and/or any other suitable type of acoustic elements.
  • the one or more acoustic elements of the array 12 are in communication with (e.g., electrically coupled to) electronic circuitry 14.
  • the electronic circuitry 14 can comprise a microbeamformer (μBF).
  • the electronic circuitry comprises a multiplexer circuit (MUX).
  • the electronic circuitry 14 is located in the probe 10 and communicatively coupled to the transducer array 12. In some embodiments, one or more components of the electronic circuitry 14 can be positioned in the probe 10. In some embodiments, one or more components of the electronic circuitry 14, can be positioned in a computing device or processing system 28.
  • the computing device 28 may be or include a processor, such as one or more processors in communication with a memory. As described further below, the computing device 28 may include a processor circuit as illustrated in Fig. 2.
  • some components of the electronic circuitry 14 are positioned in the probe 10 and other components of the electronic circuitry 14 are positioned in the computing device 28.
  • the electronic circuitry 14 may comprise one or more electrical switches, transistors, programmable logic devices, or other electronic components configured to combine and/or continuously switch between a plurality of inputs to transmit signals from each of the plurality of inputs across one or more common communication channels.
  • the electronic circuitry 14 may be coupled to elements of the array 12 by a plurality of communication channels.
  • the electronic circuitry 14 is coupled to a cable 16, which transmits signals including ultrasound imaging data to the computing device 28. In the computing device 28, the signals are digitized and coupled to channels of a system beamformer 22, which appropriately delays each signal.
  • System beamformers may comprise electronic hardware components, hardware controlled by software, or a microprocessor executing beamforming algorithms.
  • the beamformer 22 may be referenced as electronic circuitry.
  • the beamformer 22 can be a system beamformer, such as the system beamformer 22 of Fig. 1, or it may be a beamformer implemented by circuitry within the ultrasound probe 10.
  • the system beamformer 22 works in conjunction with a microbeamformer (e.g., electronic circuitry 14) disposed within the probe 10.
  • the beamformer 22 can be an analog beamformer in some embodiments, or a digital beamformer in some embodiments.
  • the system includes A/D converters which convert analog signals from the array 12 into sampled digital echo data.
  • the beamformer 22 generally will include one or more microprocessors, shift registers, and/or digital or analog memories to process the echo data into coherent echo signal data. Delays are effected by various means such as by the time of sampling of received signals, the write/read interval of data temporarily stored in memory, or by the length or clock rate of a shift register as described in U.S. Patent No. 4,173,007 to McKeighen et al., the entirety of which is hereby incorporated by reference herein. Additionally, in some embodiments, the beamformer can apply appropriate weights to each of the signals generated by the array 12.
  • the beamformed signals from the image field are processed by a signal and image processor 24 to produce 2D or 3D images for display on an image display 30.
  • the signal and image processor 24 may comprise electronic hardware components, hardware controlled by software, or a microprocessor executing image processing algorithms. It generally will also include specialized hardware or software which processes received echo data into image data for images of a desired display format such as a scan converter.
  • beamforming functions can be divided between different beamforming components.
  • the system 100 can include a microbeamformer located within the probe 10 and in communication with the system beamformer 22. The microbeamformer may perform preliminary beamforming and/or signal processing that can reduce the number of communication channels required to transmit the receive signals to the computing device 28.
  • Control of ultrasound system parameters such as scanning mode (e.g., B-mode, M-mode) and probe selection is done under the control of a system controller 26, which is coupled to various modules of the system 100.
  • the system controller 26 may be formed by application specific integrated circuits (ASICs) or microprocessor circuitry and software data storage devices such as RAMs, ROMs, or disk drives.
  • control information may be provided to the electronic circuitry 14 from the computing device 28 over the cable 16, conditioning the electronic circuitry 14 for operation of the array as required for the particular scanning procedure.
  • the user inputs these operating parameters by means of a user interface device 20.
  • the image processor 24 is configured to generate images of different modes to be further analyzed or output to the display 30.
  • the image processor can be configured to compile a B-mode image, such as a live B-mode image, of an anatomy of the patient.
  • the image processor 24 is configured to generate or compile an M-mode image.
  • An M-mode image can be described as an image showing temporal changes in the imaged anatomy along a single scan line.
  • the computing device 28 may comprise hardware circuitry, such as a computer processor, application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), capacitors, resistors, and/or other electronic devices, software, or a combination of hardware and software.
  • the computing device 28 is a single computing device. In other embodiments, the computing device 28 comprises separate computer devices in communication with one another.
  • Fig. 2 is a schematic diagram of a processor circuit 150, according to embodiments of the present disclosure.
  • the processor circuit 150 may be implemented in the computing device 28, the signal and image processor 24, the controller 26, and/or the probe 10 of Fig. 1.
  • the processor circuit 150 may include a processor 160, a memory 164, and a communication module 168. These elements may be in direct or indirect communication with each other, for example via one or more buses.
  • the processor 160 may include a central processing unit (CPU), a digital signal processor (DSP), an ASIC, a controller, an FPGA, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein.
  • the processor 160 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the memory 164 may include a cache memory (e.g., a cache memory of the processor 160), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and non-volatile memory, or a combination of different types of memory.
  • the memory 164 includes a non-transitory computer-readable medium.
  • the memory 164 may store instructions 166.
  • the instructions 166 may include instructions that, when executed by the processor 160, cause the processor 160 to perform the operations described herein with reference to the processor 28 and/or the probe 10 (Fig. 1). Instructions 166 may also be referred to as code.
  • the terms “instructions” and “code” should be interpreted broadly to include any type of computer- readable statement(s). For example, the terms “instructions” and “code” may refer to one or more programs, routines, sub-routines, functions, procedures, etc. “Instructions” and “code” may include a single computer-readable statement or many computer-readable statements.
  • the communication module 168 can include any electronic circuitry and/or logic circuitry to facilitate direct or indirect communication of data between the processor 28, the probe 10, and/or the display 30.
  • the communication module 168 can be an input/output (I/O) device.
  • the communication module 168 facilitates direct or indirect communication between various elements of the processor circuit 150 and/or the processing system 106 (Fig. 1A).
  • an ultrasound imaging system comprises an ultrasound transducer comprising an array of one or more transducer elements, where the ultrasound transducer is configured to scan or sweep through a volume of the patient to obtain a three-dimensional ultrasound data set of the volume.
  • This three-dimensional ultrasound data can be used to generate three-dimensional images or models of the anatomy.
  • two-dimensional images corresponding to a variety of imaging planes that intersect the volume can be reconstructed from the three-dimensional ultrasound data.
  • Fig. 3 shows a plurality of image slices 210, 220 forming a three-dimensional ultrasound data set of a volume 200.
  • the slices 210, 220 are obtained as part of an image sequence using an ultrasound transducer array 12.
  • the array 12 comprises a two-dimensional array of ultrasound transducer elements that can be controlled as a phased array to electronically steer ultrasonic beams of acoustic energy to sweep out the image slices 210, 220, in a plurality of corresponding image planes.
  • the ultrasound transducer array 12 may comprise a single ultrasound transducer element, or a one-dimensional array of transducer elements, in which case the transducer 12 is manually scanned in one or more degrees of freedom to acquire the three-dimensional data set of the volume 200.
  • the image slices 210, 220 may correspond to image planes of an image sequence performed by the ultrasound transducer.
  • the three-dimensional data set of the volume 200 may be formed by acquiring ultrasound data in a step-wise fashion where an image slice is obtained by steering or sweeping the ultrasound beams or scan lines across the corresponding image plane, and combining the plurality of different image slices 210, 220 to form the three-dimensional data set.
  • the slices 210, 220 may be combined into a three-dimensional image by determining interpolated intensity values from the different slices 210, 220, such that an intensity value can be determined for any location (e.g., in a Cartesian coordinate system) in the three-dimensional image.
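  • As an illustration of this interpolation step (a minimal sketch, not the patent's implementation), the following Python/NumPy function samples an intensity value at an arbitrary location from a hypothetical stack of parallel slices by bilinear interpolation within the two nearest slices and linear blending between them; the array layout and function name are assumptions for the example.

```python
import numpy as np

def intensity_at(slices, slice_z, x, y, z):
    """Linearly interpolate an intensity value at an arbitrary (x, y, z)
    location from a stack of parallel 2D image slices.

    slices  : (N, H, W) array of acquired 2D slices (hypothetical layout)
    slice_z : (N,) sorted positions of the slices along the sweep direction
    x, y    : pixel coordinates within a slice (column, row)
    z       : queried position along the sweep direction
    """
    # Find the two slices that bracket z and the blending weight between them.
    i = int(np.clip(np.searchsorted(slice_z, z) - 1, 0, len(slice_z) - 2))
    t = (z - slice_z[i]) / (slice_z[i + 1] - slice_z[i])
    t = float(np.clip(t, 0.0, 1.0))

    # Bilinear interpolation within a single slice.
    def bilinear(img, x, y):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
        bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
        return (1 - fy) * top + fy * bot

    # Blend between the two bracketing slices.
    return (1 - t) * bilinear(slices[i], x, y) + t * bilinear(slices[i + 1], x, y)
```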
  • the reconstructed image 230, in contrast to the image slices 210, 220, does not directly correspond to the two-dimensional segments of the three-dimensional imaging sequence, and does not represent an image plane that intersects the ultrasound transducer 12.
  • the reconstructed image 230 may represent an image plane passing through the volume if the ultrasound transducer 12 had been moved to a different position and orientation relative to the volume 200.
  • an image can be reconstructed from the three-dimensional data set that is representative of an imaging plane that does intersect the ultrasound transducer 12.
  • multiplanar reconstruction can be used to reconstruct, from the three-dimensional ultrasound data, a target image corresponding to a target view or imaging plane of the patient (e.g., trans-ventricular, trans-cerebellar, trans-thalamic, apical view, etc.).
  • MPR may be used to generate the reconstructed image 230 shown in Fig. 3.
  • MPR can be performed using input from various image processing techniques, including artificial intelligence (A.I.), machine learning, and/or deep learning techniques, to identify a two-dimensional cross section in the three-dimensional ultrasound data that includes a combination and/or arrangement of anatomical structures associated with a particular view.
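  • The basic MPR resampling operation of extracting an arbitrary oblique plane from a Cartesian volume could look roughly like the sketch below; the (Z, Y, X) volume layout, the plane parameterization by an origin and two in-plane axes, and the use of SciPy's map_coordinates are illustrative assumptions rather than a description of the disclosed system.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_mpr_slice(volume, origin, u_axis, v_axis, out_shape=(256, 256), spacing=1.0):
    """Reconstruct a 2D image along an arbitrary plane through a 3D volume.

    volume : (Z, Y, X) Cartesian grid of interpolated intensity values
    origin : (3,) plane origin in voxel coordinates (z, y, x)
    u_axis : (3,) unit vector spanning the plane's rows (voxel coordinates)
    v_axis : (3,) unit vector spanning the plane's columns (voxel coordinates)
    """
    origin = np.asarray(origin, dtype=float)
    u_axis = np.asarray(u_axis, dtype=float)
    v_axis = np.asarray(v_axis, dtype=float)

    h, w = out_shape
    # In-plane sample offsets, centred on the plane origin.
    us = (np.arange(w) - w / 2) * spacing
    vs = (np.arange(h) - h / 2) * spacing
    vv, uu = np.meshgrid(vs, us, indexing="ij")

    # Voxel coordinates of every output pixel: origin + u*u_axis + v*v_axis.
    coords = (origin[:, None, None]
              + uu[None] * u_axis[:, None, None]
              + vv[None] * v_axis[:, None, None])

    # Trilinear resampling of the volume at the plane's sample points.
    return map_coordinates(volume, coords, order=1, mode="nearest")
```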
  • such automated identification of a target view within the three-dimensional ultrasound data may be referred to as plane estimation (PE).
  • PE techniques may have some drawbacks.
  • the sonographer may not be able to easily confirm that an image generated using PE and displayed as an MPR is the correct or optimal view.
  • the sonographer may lack spatial context that would otherwise be gained if the sonographer was manually scanning the probe across the anatomy around the target view. Thus, confidence that the correct image has been acquired may be diminished.
  • an incorrect or suboptimal image plane may be selected by the PE algorithm, and the sonographer may not be able to easily identify what adjustments are to be made to more completely achieve a desired view.
  • the present disclosure provides devices, systems, and associated methods for providing contextual visualizations associated with automatically reconstructed two-dimensional images, such as images generated from PE.
  • the present disclosure provides for automated image reconstruction techniques that allow for a target image to be generated and displayed, while still providing spatial contextual information to the sonographer so that the sonographer can confirm that the correct image plane has been chosen to automatically reconstruct the target image.
  • the sonographer places an ultrasound probe configured to obtain three-dimensional ultrasound data on the skin of the patient at a location generally associated with the target image plane. A three-dimensional data set of a volume of the patient is acquired, and the sonographer can set the probe down.
  • a processing system and/or processor circuit analyzes the three-dimensional ultrasound data using, for example, PE and/or MPR, to identify, reconstruct, or otherwise generate a target image associated with a target image plane or view, such as the trans-cerebellar view.
  • the target image can then be displayed using MPR to the user, which may allow the user to make measurements or assessments of the anatomy in the target image.
  • the processor circuit is configured to provide spatial context for the target image by generating a plurality of adjacent images that are associated with imaging planes adjacent to the target image plane.
  • the adjacent images are generated such that the target image and the adjacent images all correspond to a common simulated motion path or trajectory.
  • the simulated motion path is computed so that scanning through the adjacent images and the target image simulates physical scanning or pacing of the ultrasound probe around the target image plane.
  • the sonographer may scan through the adjacent images along the simulated motion path using a keyboard, track ball, or other interface device as though the user was manually moving the probe around the target image plane. Accordingly, the sonographer may have increased confidence that the processor circuit has correctly and/or optimally reconstructed the target image. Further, by scanning through the reconstructed adjacent image, the sonographer may be able to determine whether the target image reconstruction should be revised or redone.
  • embodiments of the present disclosure provide for an experience of seeing two-dimensional images that are proximate or near to a target plane.
  • the two-dimensional images can be created or reconstructed from a three-dimensional data set, thereby recreating a traditional experience in order to improve the sonographer’s confidence that the desired anatomical planes were correctly detected by the automated procedure.
  • the systems and methods described herein enable the simulation of an ultrasound probe movement which allows a smooth transition from one automatically detected plane to another automatically detected plane.
  • These smooth image transitions emulate the transducer movements, which are familiar to the sonographer from a standard two-dimensional ultrasound exam. Smooth transitions between different view planes can be achieved, for example, by interpolating the plane parameters (normal and offset vectors) and thus obtaining continuous spline trajectories for two-dimensional plane displacement in three-dimensional space.
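  • A minimal sketch of such plane-parameter interpolation is shown below, assuming each plane is described by a unit normal vector and an offset point; spherical interpolation of the normals combined with linear interpolation of the offsets is one plausible choice, not necessarily the one used in the disclosed system.

```python
import numpy as np

def interpolate_plane(n0, d0, n1, d1, t):
    """Interpolate between two view planes given by unit normals n0, n1 and
    offset vectors d0, d1 (points on each plane), for t in [0, 1].

    Spherical interpolation (slerp) of the normals keeps them unit length,
    while the offsets are interpolated linearly.
    """
    n0, n1 = np.asarray(n0, float), np.asarray(n1, float)
    d0, d1 = np.asarray(d0, float), np.asarray(d1, float)

    cos_a = np.clip(np.dot(n0, n1), -1.0, 1.0)
    angle = np.arccos(cos_a)
    if angle < 1e-6:                       # nearly parallel planes: fall back to lerp
        n_t = (1 - t) * n0 + t * n1
    else:
        n_t = (np.sin((1 - t) * angle) * n0 + np.sin(t * angle) * n1) / np.sin(angle)
    n_t /= np.linalg.norm(n_t)

    d_t = (1 - t) * d0 + t * d1            # linear interpolation of the plane offset
    return n_t, d_t
```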
  • the sequence of image frames shown to the user may be a compilation of adjacent MPRs determined based on geometric spacing.
  • the sequence of images or image frames shown includes a selection of geometrically adjacent images that have been selected, e.g., by an A.I. algorithm, to be somewhat similar to the target image plane.
  • the proposed dynamic representation can be used to perform interpolation between respective image plane parameters. These interpolated images can be used to dynamically transition from one selected plane to the other, thus performing a transition through the three-dimensional volume in which the planes are located.
  • adjacent image frames correspond to parallel planes to the target plane.
  • a set of adjacent image frames can be created by interpolating between the planes of interest. For this purpose, interpolation can be done in a fixed order, e.g., always going from TT to TV, or by finding the two closest planes.
  • an A.I. -based approach for plane detection provides a way of measuring uncertainty such that a direction of highest uncertainty may be identified. Images may be interpolated along the direction of highest uncertainty. This would leverage an efficient manual correction of automatically determined images.
  • the adjacent images can be generated such that at least one adjacent image includes the anatomical feature that is excluded from the target image, while the other adjacent images are generated to show the anatomical feature disappearing from the field of view (e.g., the cerebellum in the TV plane of the fetal brain).
  • the adjacent image set can be displayed with a schematic anatomical view, since the three-dimensional volume together with the estimated plane geometries and located anatomical objects provides sufficient information for the registration to a three-dimensional anatomical model.
  • Fig. 4 is a flow chart illustrating a method 300 for providing spatial context for automated ultrasound image plane reconstructions.
  • a processor circuit receives, from an ultrasound probe communicatively coupled to the processor circuit, three-dimensional ultrasound data of an anatomy of a patient.
  • the ultrasound data may be obtained during a fetal ultrasound procedure, a trans-thoracic ultrasound procedure, a cardiac ultrasound procedure, or any other suitable procedure.
  • the ultrasound probe includes an ultrasound transducer having an array of one or more ultrasound transducer elements.
  • the ultrasound transducer comprises a one-dimensional array, a 1.5-dimensional array, a 1.X-dimensional array, a two-dimensional array, or any other suitable type of array of ultrasound transducer elements.
  • the array is mechanically scanned to obtain a plurality of image slices of the volume.
  • the array is operated as a solid-state array, or a phased array configured to electronically scan across the volume.
  • the three-dimensional ultrasound data is made up of a plurality of two- dimensional images or image slices.
  • the three-dimensional ultrasound data is made up of a plurality of individual scan lines of ultrasound data.
  • the ultrasound data received by the processor circuit comprises raw ultrasound signals or data. In other embodiments, the ultrasound data received by the processor circuit comprises beamformed, partially beamformed, and/or filtered ultrasound data. In some embodiments, the processor circuit receives the ultrasound data directly from the ultrasound probe. In some embodiments, the ultrasound data is first received by a memory, which stores the ultrasound data, and the processor circuit then receives the ultrasound data from the memory. Accordingly, in some embodiments, the processor circuit receives the ultrasound data indirectly from the ultrasound probe after being stored in the memory.
  • step 310 includes generating a user instruction to be output to a user output device or interface (e.g., display, speaker, etc.) to first place the ultrasound probe in a general location such that a target view is within a three-dimensional field of view of the ultrasound probe.
  • the user instruction may include a diagrammatic illustration of the patient’s body, and an indicator showing how to position and orient the probe relative to the patient’s body.
  • the processor circuit generates, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy.
  • the target image plane or view to be achieved is the trans-ventricular, trans- cerebellar, or trans-thalamic view of a fetus, an apical view of a patient’s heart, or any other suitable view.
  • generating the target image comprises using a PE and/or MPR process to automatically reconstruct the target image.
  • image processing and recognition techniques can be used, including A.I. techniques, machine learning techniques, deep learning techniques, state machines, etc.
  • generating the target image may include comparing the three-dimensional image data to a model of the anatomy.
  • generating the target image includes comparing various portions of the three-dimensional ultrasound data representative of different cross sections of the three-dimensional field of view to one or more exemplary images of the anatomy.
  • In step 330, the processor circuit generates, from the three-dimensional ultrasound data, one or more adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the one or more adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data.
  • the adjacent image(s) generated during step 330 can be used to provide contextual information to the user, which can accompany the display of the target image generated in step 320.
  • the simulated motion path may be representative of a physical adjustment or movement of the ultrasound probe, such as a translation, linear movement, compression, rotation, fanning, sweeping, rocking, or any other suitable type of motion.
  • step 330 includes determining or computing the simulated motion path.
  • the processor circuit may determine or compute the simulated motion path based on the target view to be reconstructed.
  • the simulated motion path may be a parameter for reconstructing the adjacent images, in that the simulated motion path or trajectory defines the spatial relationships between the adjacent images to be reconstructed from the three- dimensional ultrasound data of the volume.
  • the path may be defined by a set of points which are smoothly connected by a curved line (e.g., a spline).
  • the point set may be a set of anatomical landmarks, such as organs to be inspected, or may follow an anatomical structure such as the spine.
  • the path may be constructed such that it shows, for educational purposes, the degrees of freedom available when maneuvering the transducer, such as translating the transducer position on the skin surface, or rotating/tilting the transducer around its three principal axes.
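  • As an illustrative sketch of such a smoothly connected path (under the assumption that the anchor points are available as 3D coordinates), the following code fits a cubic spline through hypothetical landmark points and returns sampled positions together with unit tangents, which could then be used to orient the reconstructed planes along the path.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def build_motion_path(anchor_points, n_samples=50):
    """Fit a smooth simulated motion path through a set of 3D anchor points
    (e.g., hypothetical anatomical landmarks) and return sampled positions
    together with unit tangents along the path."""
    pts = np.asarray(anchor_points, dtype=float)      # (N, 3) anchor points
    # Parametrise by cumulative chord length so spacing is roughly uniform.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])

    spline = CubicSpline(s, pts, axis=0)
    s_fine = np.linspace(0.0, s[-1], n_samples)

    positions = spline(s_fine)                        # (n_samples, 3) path points
    tangents = spline(s_fine, 1)                      # first derivative along the path
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    return positions, tangents
```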
  • Fig. 5 shows a target image 410, an adjacent image 420, and an interpolated image 430 that correspond to a simulated motion path 440.
  • the darker rectangles are representative of image planes 410, 420 reconstructed from the ultrasound data.
  • the white rectangle is representative of an interpolated image plane 430 generated using the target image 410 and the adjacent image 420.
  • the target image 410, adjacent image 420 and the interpolated image 430 are generated to correspond to a simulated motion path 440.
  • the simulated motion path 440 may be determined or computed by the processor circuit, and may be representative of a simulated type of movement of the ultrasound probe (i.e., as if the ultrasound probe were to be physically scanned across a volume).
  • the simulated motion path 440 is representative of a mixed tilting/rocking movement of the ultrasound probe.
  • the orthogonal axes 442 of the images 410, 420, 430 are tangential to the simulated motion path 440 at the points at which the images 410, 420, 430 intersect the simulated motion path 440.
  • Figs. 6A, 6B, and 6C are representative of image planes, both reconstructed and interpolated, that correspond to three different simulated motion paths.
  • In Fig. 6A, the image slices include a reconstructed image slice 510, represented by a dark rectangle, and interpolated images 520, represented by the white rectangles.
  • the image slices 510, 520 are associated with a simulated linear motion path. Accordingly, the images correspond to image planes that are parallel to each other and spaced apart from each other.
  • the images 600 are associated with a simulated curved motion path, which includes a combination of simulated tilting and translation of the ultrasound probe.
  • the images 700 are associated with a simulated curved motion path, which includes a simulated tilting of the ultrasound probe.
  • the simulated motion path is determined or computed to include more than a single type of motion along its length.
  • a simulated motion path may be computed such that multiple image planes of interest (e.g., TC, TV, TT) would be obtained by the simulated movement of the probe along the motion path.
  • the simulated motion path may be computed to have different segments associated with different types of simulated movements (e.g., translation, rocking, tilting, rotation, compression, etc.).
  • the processor circuit is configured to identify, from the three-dimensional ultrasound data, an image of interest different from the target image, and determine the simulated motion path based on the target image plane, the adjacent image plane, and the image plane of interest. The processor circuit can then output the image of interest to the display in response to receiving a user input, as further described below.
  • the processor circuit may be configured to generate one or more interpolated images (e.g., 430, Fig. 5) to accompany the one or more adjacent images (410, 420, Fig. 5).
  • three-dimensional image data of a volume may be relatively sparse.
  • the ultrasound probe may be capable of obtaining and displaying approximately 30 frames per second.
  • sonographers may be accustomed to the ability to view two-dimensional views with a high degree of temporal and spatial resolution.
  • Because ultrasound imaging procedures are constrained by the speed of sound, the number of scan lines that can be obtained in a given amount of time is limited.
  • temporal and/or spatial resolution may be reduced compared to two-dimensional ultrasound imaging. Therefore, ultrasound data points (e.g., voxels) available for the reconstruction of adjacent images may be limited.
  • Fig. 7 provides a flow chart illustrating a method 800 for generating and outputting an interpolated image. It will be understood that one or more steps of the method 800 may be performed by the ultrasound imaging system 100 shown in Fig. 1, and/or the processor circuit 150 shown in Fig. 2, for example.
  • the processor circuit generates an adjacent image frame or image slice corresponding to a first image plane adjacent to the target image plane. Referring to Fig. 5, the target image and the adjacent image are reconstructed from the three-dimensional ultrasound data, as described above.
  • the adjacent image frame may be generated from a three-dimensional image that was created by interpolating the intensity values from a set of two-dimensional image slices obtained of a volume by an ultrasound imaging system.
  • the interpolation of the two-dimensional images is made with respect to a Cartesian coordinate system or grid.
  • the target image and the adjacent image may be generated or reconstructed using MPR.
  • the processor circuit interpolates the image position and orientation between the position and orientation of the target image 410 and the adjacent image 420 to then generate the interpolated image 430 as an MPR at the interpolated position and orientation.
  • the MPR image 430 is generated based on the interpolated intensity values of the three-dimensional image, as described above.
  • the adjacent images and the interpolated image are all generated to correspond to the simulated motion path 440. Accordingly, the orthogonal axis 442 of the interpolated image, like the reconstructed adjacent images, is tangential to the curved simulated motion path.
  • the target image plane, the adjacent image plane, and the interpolated image plane are all orthogonal to the simulated motion path at the points at which the planes intersect the simulated motion path.
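  • One simple way to orient a plane so that its normal follows the path tangent, sketched below as an assumption rather than the disclosed method, is to build an orthonormal in-plane basis from the tangent and an arbitrary reference vector.

```python
import numpy as np

def plane_basis_from_tangent(tangent, up=(0.0, 0.0, 1.0)):
    """Build an orthonormal in-plane basis (u, v) for an image plane whose
    normal is the path tangent at the point where the plane intersects the
    simulated motion path; the 'up' reference vector is only used to fix the
    in-plane rotation and is an assumption of this sketch."""
    n = np.asarray(tangent, dtype=float)
    n = n / np.linalg.norm(n)
    up = np.asarray(up, dtype=float)
    # Choose a fallback reference if the tangent is (nearly) parallel to 'up'.
    if abs(np.dot(n, up)) > 0.99:
        up = np.array([0.0, 1.0, 0.0])
    u = np.cross(up, n)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)          # unit length, since n and u are orthonormal
    return u, v                 # together with normal n, defines the sampling plane
```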
  • the processor circuit outputs the interpolated image to the display.
  • the processor circuit may be configured to generate one or more interpolated images between a target image and an adjacent image, and/or between adjacent images, and arrange the adjacent and interpolated images to form an image stack that can be scrolled or scanned through in response to the input from a user.
  • the processor circuit incrementally updates the display based on the extent of the input.
  • the speed at which the sonographer rotates the track ball or swipes across a track pad may determine the speed at which the processor circuit updates the display with successive adjacent and interpolated images.
  • the sonographer may have a similar amount of control to scanning through adjacent image planes as the sonographer would have had using a traditional two-dimensional imaging approach (i.e., without MPR). This type of visualization may provide for a familiar imaging experience while leveraging the workflow advantages of automated image reconstruction.
  • the interpolated image frames 520 can supplement the reconstructed adjacent image frames 510 to provide a smoother, more natural scanning visualization of the image planes adjacent to a target image plane.
  • the processor circuit is configured to interpolate between a target image and an adjacent image, or between two adjacent images, to generate the interpolated image.
  • the interpolated image can then be output to the display in response to a user input indicating a direction of motion along the simulated motion path.
  • the interpolated image is generated by the processor circuit.
  • the interpolated image is representative of an intermediate image plane in the volume between two reconstructed images generated directly from the three-dimensional ultrasound data.
  • the interpolated image is associated with the same simulated motion path as the reconstructed adjacent image(s). Accordingly, as the user scans through the adjacent and interleaved interpolated images, the result is an image stream that more closely resembles manually scanning across a target region of an imaged volume, as performed in two-dimensional imaging procedures.
  • the interpolated images may be obtained by interpolation of the plane normal vectors and the plane offset vectors.
  • the interpolation may be performed based on the position and orientation associated with each of the reconstructed images to obtain an interpolated plane position and normal.
  • the intensity values of the 3D grid corresponding to the reconstructed images can then be interpolated to generate the intensity values for the interpolated plane.
  • motion vectors may be computed for individual pixels to generate the interpolated image.
  • the processor circuit outputs the target image slice to a display in communication with the processor circuit.
  • the processor circuit may output a graphical user interface to the display, wherein the GUI includes the target image.
  • the processor circuit receives a user input representative of a direction of motion along the simulated motion path.
  • the processor circuit may receive the user input via a user input device in communication with the processor circuit.
  • the user input device may include a mouse, keyboard, trackball, microphone, touch-screen display, track pad, physical button, or any other suitable type of user input device.
  • the user input may be provided by the sonographer by pressing an arrow key on a keyboard, where the arrow direction corresponds to a direction along the simulated motion path (e.g., forward, backward).
  • the user input is received by a scrolling device, such as a mouse, track ball, or track pad.
  • the direction of the scrolling may correspond to the direction along the simulated motion path.
  • any suitable user input could be used to indicate the direction of motion along the simulated path.
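  • A toy sketch of the described interaction, in which a user input direction steps through a precomputed stack of adjacent and interpolated images ordered along the simulated motion path, might look like the following; the class and method names are hypothetical.

```python
class SimulatedSweepViewer:
    """Minimal sketch of the scroll interaction: a precomputed stack of
    reconstructed and interpolated images, ordered along the simulated motion
    path, is stepped forward or backward in response to user input."""

    def __init__(self, image_stack, target_index):
        self.images = image_stack          # list of 2D arrays ordered along the path
        self.index = target_index          # start on the target image

    def on_user_input(self, direction, steps=1):
        """direction: +1 for forward along the path, -1 for backward
        (e.g., mapped from arrow keys or trackball scrolling)."""
        self.index = max(0, min(len(self.images) - 1, self.index + direction * steps))
        return self.images[self.index]     # image to be output to the display
```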
  • the processor circuit outputs an adjacent image from the plurality of adjacent images that corresponds to the direction of motion.
  • the target image is first output to the display.
  • the sonographer then uses the user input device to scan forward or backward along the simulated direction of motion, and the processor circuit outputs one or more adjacent images generated in step 330. Accordingly, the sonographer can control the display of the reconstructed images to scan forward and backward along the simulated motion path as if the user were manually scanning the ultrasound probe along the motion path, but after the ultrasound image data has already been obtained and the ultrasound probe has been set down. Using this process, the sonographer is given the benefit of both (1) automated reconstruction of a target image from a three-dimensional data set, and (2) the ability to gain spatial context to confirm that the automatically reconstructed target image is correctly and/or optimally reconstructed.
  • Fig. 8 is a flow diagram of a method 900 for determining a simulated motion path based on a determination of a direction of uncertainty. It will be understood that one or more steps of the method 900 may be performed by the ultrasound imaging system 100 shown in Fig. 1, and/or the processor circuit 150 shown in Fig. 2, for example. In step 910, the processor circuit determines a direction of uncertainty relative to the target image plane.
  • the processor circuit determines the direction of uncertainty relative to the target image plane by applying a covariance matrix to the three-dimensional ultrasound data.
  • the simulated motion path is generated by the processor circuit based on the determined direction of uncertainty.
  • the processor circuit may determine a plurality of directions of uncertainty.
  • the processor circuit may determine a direction or vector of highest uncertainty.
  • the processor circuit determines a direction of median uncertainty, average uncertainty, minimum uncertainty, and/or any other suitable relative measure of uncertainty selected from one or more determined directions or vectors of uncertainty.
  • the processor circuit generates one or more adjacent images based on the simulated motion path determined in step 920, which is based on the determined direction of uncertainty.
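  • One plausible reading of this step, sketched below under the assumption that repeated plane-parameter estimates are available (e.g., from an ensemble of plane-estimation networks), is to eigen-decompose their covariance matrix and take the eigenvector with the largest eigenvalue as the direction of highest uncertainty; this is an illustration, not necessarily the patent's method.

```python
import numpy as np

def direction_of_highest_uncertainty(plane_estimates):
    """Estimate the direction of highest uncertainty of an automatically
    detected plane from a set of repeated plane-parameter estimates
    (hypothetical input, e.g., plane offsets predicted by an ensemble or by
    Monte-Carlo dropout of a plane-estimation network).

    The covariance matrix of the estimates is eigen-decomposed; the eigenvector
    with the largest eigenvalue is the direction along which the estimates
    spread the most, and can be used to orient the simulated motion path.
    """
    x = np.asarray(plane_estimates, dtype=float)       # (n_estimates, 3)
    cov = np.cov(x, rowvar=False)                      # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
    return eigvecs[:, -1], eigvals[-1]                 # principal direction, variance
```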
  • the sonographer may be more likely to identify incorrectly-aligned target images, or to determine probe movements and adjustments to generate the target image such that it is aligned with a target view or target image plane. Further details on determining a direction associated with uncertainty in MPR procedures can be found in Alexander Schmidt-Richberg, et al., “Offset regression networks for view plane estimation in 3D fetal ultrasound,” Proc. SPIE, Vol. 10949, id. 109493K (March 15, 2019), the entirety of which is incorporated by reference.
  • Fig. 9 is a graphical user interface (GUI) 1000 of an ultrasound imaging system, where the GUI includes an anatomical diagram 1010 of a body portion, and an MPR image 1020 of an image plane within the body portion.
  • the anatomical diagram 1010 comprises a graphical representation of an anatomy of the patient (e.g., head of the fetus), and is annotated with indicators showing how various target image planes intersect the anatomy.
  • the body portion comprises a human head.
  • the diagram 1010 may be associated with a head of a fetus for use during a fetal ultrasound procedure.
  • a plurality of line indicators is shown overlaid on the diagram of anatomy 1010. The line indicators are representative of various image planes or views of interest.
  • the views associated with the indicators include a trans-ventricular (TV) image plane, a trans-cerebellar (TC) image plane, and a trans-thalamic (TT) image plane.
  • Another indicator is provided showing a reconstructed frame (MPR).
  • the MPR indicator may be a dynamic indicator that is responsive to input from a user provided using a user interface device.
  • the MPR indicator on the diagram 1010 corresponds to the MPR image 1020 shown beside the diagram 1010.
  • the GUI 1000 may display a target image corresponding to one of the views shown in the diagram 1010.
  • a user may interact with the GUI using a user interface device, such as a mouse, keyboard, track ball, voice input, touch screen input, or any other suitable input.
  • the input from the user indicates a direction of motion along a simulated motion path.
  • the user input may be received in the form of a scrolling of the mouse, an arrow button on a keyboard, and/or scrolling of a track ball.
  • the MPR indicator and the displayed MPR image 1020 can be updated in response to the received input to show adjacent MPR images reconstructed along the simulated motion path.
  • a user may scan/scroll forward and backward along the path, and the MPR indicator may be updated in real time to show the location and orientation of the image plane associated with the MPR image 1020.
  • the MPR image 1020 is also updated based on the user input to show the image corresponding to the MPR image plane illustrated with respect to the diagram 1010.
  • the sonographer can control the display of the MPR images as if the user was manually scanning the ultrasound probe up and down a trajectory, but after the ultrasound image data has already been obtained.
• the GUI 1000 may be provided after the sonographer has obtained the three-dimensional ultrasound data and replaced the ultrasound probe.
  • the processor circuit is configured to display a looped playback of the target image and adjacent images to provide the appearance of periodically scanning forward and/or backward along the simulated motion path.
  • the processor circuit is configured to scan through a plurality of adjacent images in response to a single user input (e.g., key press, button press, etc.)
  • the processor circuit is configured to receive a user input to correct or adjust the reconstructed target image.
  • the processor circuit may be configured to receive, from the sonographer via a user input device, a selection corresponding to an adjacent image. The processor circuit may then tag, save, or otherwise identify the selected adjacent image as the target image.
  • the target image comprises an interpolated image generated as described above.
• the adjacent images are displayed successively, one at a time. In other embodiments, the adjacent images are displayed alongside the target image.
  • multiple adjacent and/or interpolated images are displayed simultaneously.
  • multiple different simulated motion paths are determined. For example, motion paths corresponding to different directions of uncertainty may be provided.
  • intersecting simulated motion paths are determined, as well as additional adjacent images associated with the intersecting simulated motion paths.
• different inputs received through a user interface (e.g., up/down or left/right key presses) correspond to adjacent images along different simulated motion paths.
  • Embodiments of the present disclosure provide a number of benefits for automated image reconstruction procedures. For example, generating and displaying the dynamic adjacent image stream provides a familiar imaging experience, a simple and intuitive way to correct the target image, and a simple workflow transition from two-dimensional to three- dimensional ultrasound image acquisition. Further, embodiments of the present disclosure provide a user-friendly way to navigate between multiple planes of interest in one anatomy (e.g. TT, TV and TC planes for fetal ultrasound), and efficient ways to provide important anatomical context information and renderings directly related to the clinical guidelines for standard plane selection by use of anatomical objects to be or not to be included. Further, embodiments of the present disclosure allow for enhanced clinical confidence in an automatically reconstructed image.
• the methods described above (e.g., method 900) may be performed by one or more components of an ultrasound imaging system, such as a processor or processor circuit, a multiplexer, a beamformer, a signal processing unit, an image processing unit, or any other suitable component of the system.
  • one or more steps described above may be carried out by the processor circuit 150 described with respect to Fig. 2.
  • the processing components of the system can be integrated within the ultrasound imaging device, contained within an external console, or may be a separate component.

Abstract

An ultrasound imaging system includes a processor circuit in communication with an ultrasound transducer configured to receive three-dimensional ultrasound data of an anatomy, and generate a target image corresponding to a target image plane of the anatomy, and a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path. The processor circuit is further configured to display the target image, receive a user input representative of a direction of motion along the simulated motion path, and display an adjacent image of the plurality of adjacent images corresponding to the direction of motion. Accordingly, the user can observe the target image in its spatial context by scanning through the target image and one or more adjacent images on the display as if the ultrasound transducer were being scanned along the simulated motion path.

Description

CONTEXTUAL MULTIPLANAR RECONSTRUCTION OF THREE-DIMENSIONAL ULTRASOUND IMAGING DATA AND ASSOCIATED DEVICES, SYSTEMS, AND
METHODS
TECHNICAL FIELD
[0001] The present disclosure relates generally to the acquisition and processing of ultrasound images and, in particular, to systems and methods for reconstructing two-dimensional images from three-dimensional ultrasound image data.
BACKGROUND
[0002] Ultrasound imaging is frequently used to obtain images of internal anatomical structures of a patient. Ultrasound systems typically comprise an ultrasound transducer probe that includes a transducer array coupled to a probe housing. The transducer array is activated to vibrate at ultrasonic frequencies to transmit ultrasonic energy into the patient’s anatomy, and then receive ultrasonic echoes reflected or backscattered by the patient’s anatomy to create an image. Such transducer arrays may include various layers, including some with piezoelectric materials, which vibrate in response to an applied voltage to produce the desired pressure waves. These transducers may be used to successively transmit and receive several ultrasonic pressure waves through the various tissues of the body. The various ultrasonic responses may be further processed by an ultrasonic imaging system to display the various structures and tissues of the body.
[0003] Physicians and sonographers often desire to obtain certain views of the patient’s body such that the imaging plane of the ultrasound transducer is aligned to obtain image of a particular combination and orientation of anatomical structures. For example, in fetal ultrasound biometry, a sonographer obtains a variety of images or views of the fetal anatomy to perform measurements and assessments of the fetus’ development. Examples of these views include trans-ventricular, trans-cerebellar, and trans-thalamic image planes. Conventionally, these target views are achieved manually by an experienced sonographer positioning and orienting the ultrasound transducer while watching a real-time or near real-time image stream of the field of view of the ultrasound transducer.
[0004] When the sonographer determines that a desired view is shown, the sonographer may freeze or save the image frame to memory, and/or continue to move the probe around the target view to ensure that the desired view is actually achieved. The temporal and spatial information gained in conventional two-dimensional ultrasound imaging procedures can be useful in assessing an image, as it provides important context information in addition to a dedicated two-dimensional image to be used in the biometry assessment, for example. Thus, the sonographer has a spatial awareness of the location of the plane they are searching for based on their knowledge of the images of anatomy that are near or adjacent to the plane they want to capture. Intrinsically, the confidence that the target plane is correctly detected is based on the temporal context in the two-dimensional image stream. Nonetheless, this traditional two- dimensional acquisition workflow strongly depends on the individual skills of the sonographer. [0005] Alternatively, some ultrasound imaging systems allow for a target view to be obtained automatically from a three-dimensional ultrasound data set. Automated image reconstruction techniques can be beneficial because they depend less on the individual skills of the sonographer. For example, an ultrasound transducer array may be used to sweep out a three- dimensional volume and obtain a three-dimensional ultrasound data set representative of the volume while the sonographer holds the ultrasound probe body at a fixed position and orientation. A multiplanar reconstruction (MPR) technique can be used to generate a two- dimensional image associated with a desired view. While MPR imaging techniques beneficially allow for less-experienced sonographers to achieve a target view, one drawback of MPR is that an image may not be correctly reconstructed by the system, and the user does not have the spatial/temporal context to confirm whether or not the reconstructed image is correct or optimal. By contrast, conventional two-dimensional ultrasound imaging provides the sonographer with real-time feedback and spatial context as the user can make manual adjustments to the probe to evaluate the imaged volume around the target view.
SUMMARY
[0006] Aspects of the present disclosure provide ultrasound systems and devices that provide for multiplanar reconstructions of images with contextual visualizations to increase confidence in plane selection during an ultrasound imaging procedure. For example, in one embodiment, an ultrasound imaging system includes a processor circuit in communication with an ultrasound transducer configured to obtain a three-dimensional ultrasound data set. The processor circuit is configured to reconstruct, from the ultrasound data set, a target image or image slice corresponding to a target view or image plane, such as an apical view of the heart, trans-cerebellar, trans-thalamic, etc. Additionally, the processor circuit is configured to reconstruct, from the same three-dimensional ultrasound data set, one or more adjacent images corresponding to planes that are adjacent to the target image plane. The adjacent images are reconstructed such that the adjacent image and the target image correspond to image planes that are on a simulated motion path that also corresponds to the target image. For example, the simulated motion path may be representative of a linear translation, a sweeping or fanning motion, a tilt, rocking motion, and/or a rotation of the ultrasound probe, with the target image plane extending from a point along the simulated motion path. The reconstructed adjacent images can be output to a display such that a user can view the adjacent images part of a sequence with the target image to obtain spatial, contextual information about the target image.
In that regard, in some embodiments, the user can scan through the target image and one or more adjacent images on the display as if the ultrasound transducer were being scanned along the simulated motion path. Thus, the user may determine that the target image reconstructed from the image data is correctly-aligned with the target image plane, or determine that adjustments could be made to reconstruct the target image to be correctly aligned with the target image plane. [0007] According to one embodiment of the present disclosure, an ultrasound imaging apparatus includes a processor circuit configured to: receive, from an ultrasound probe communicatively coupled to the processor circuit, three-dimensional ultrasound data of an anatomy; generate, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy; generate, from the three-dimensional ultrasound data, a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the plurality of adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data; output, to a display in communication with the processor circuit, the target image; receive a user input representative of a direction of motion along the simulated motion path; and output, to the display, an adjacent image of the plurality of adjacent images corresponding to the direction of motion.
[0008] In some embodiments, the apparatus further includes the ultrasound probe. In some embodiments, the processor circuit is configured to: interpolate between a position and orientation of the target image and a position and orientation of the adjacent image to generate an interpolated image; and output the interpolated image to the display. In some embodiments, the processor circuit is configured to: determine a direction of uncertainty relative to the target image plane; and determine the simulated motion path based on the determined direction of uncertainty. In some embodiments, the processor circuit is configured to apply a covariance matrix to the three-dimensional ultrasound data to determine the direction of uncertainty.
[0009] In some embodiments, the processor circuit is configured to: identify, from the three-dimensional ultrasound data, an image plane of interest different from the target image and the adjacent image; and determine the simulated motion path based on the target image, the adjacent image, and the image of interest. In some embodiments, the processor circuit is configured to output the image of interest in response to receiving the user input. In some embodiments, the plurality of adjacent images comprises a plurality of parallel adjacent images associated with a plurality of parallel adjacent image planes. In some embodiments, the processor circuit is configured to generate the target image to exclude an anatomical feature. In some embodiments, the processor circuit is configured to generate the adjacent image to include the anatomical feature. In some embodiments, the processor circuit is further configured to output, to the display, a graphical representation of an adjacent image plane associated with the adjacent image, wherein the graphical representation includes: a diagrammatic view of a body portion associated with the target image; and an indicator of the adjacent image plane overlaid on the diagrammatic view of the body portion.
[0010] According to another embodiment, a method for reconstructing ultrasound images includes: receiving, three-dimensional ultrasound data of an anatomy obtained by an ultrasound probe; generating, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy; generating, from the three-dimensional ultrasound data, a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the plurality of adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data; outputting, to a display, the target image; receiving a user input representative of a direction of motion along the simulated motion path; and outputting, to a display, an adjacent image of the plurality of adjacent images corresponding to the direction of motion.
[0011] In some embodiments, generating the plurality of adjacent images comprises interpolating between a position and orientation of the target image and a position and orientation of the adjacent image to generate an interpolated image. In some embodiments, the method further comprises outputting the interpolated image to the display. In some embodiments, generating the plurality of adjacent images comprises: determining a direction of uncertainty relative to the target image plane; and determining the simulated motion path based on the determined direction of uncertainty. In some embodiments, determining the direction of uncertainty comprises applying a covariance matrix to the three-dimensional ultrasound data. [0012] In some embodiments, the method further comprises identifying, from the three- dimensional ultrasound data, an image of interest different from the target image and the adjacent image; and determining the simulated motion path based on the target image, the adjacent image, and the image of interest. In some embodiments, the method further comprises outputting the image of interest in response to receiving the user input. In some embodiments, generating the plurality of adjacent images comprises generating a plurality of parallel adjacent images associated with a plurality of parallel adjacent image planes. In some embodiments, generating the target image comprises generating the target image to exclude an anatomical feature. In some embodiments, generating the adjacent image comprises generating the adjacent image to include the anatomical feature. In some embodiments, the method further comprises outputting, to the display, a graphical representation of an adjacent image plane associated with the adjacent image. In some embodiments, the graphical representation includes: a diagrammatic view of a body portion associated with the target image; and an indicator of the adjacent image plane overlaid on the diagrammatic view of the body portion.
[0013] Additional aspects, features, and advantages of the present disclosure will become apparent from the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] Illustrative embodiments of the present disclosure will be described with reference to the accompanying drawings, of which:
[0015] Fig. 1 is a schematic diagram of an ultrasound imaging system, according to embodiments of the present disclosure.
[0016] Fig. 2 is a schematic diagram of a processor circuit, according to embodiments of the present disclosure.
[0017] Fig. 3 is a diagrammatic view of image slices reconstructed from a three- dimensional ultrasound data set of a volume, according to aspects of the present disclosure. [0018] Fig. 4 is a flow diagram illustrating a method for generating and displaying contextual multiplanar image reconstructions from a three-dimensional ultrasound dataset, according to aspects of the present disclosure.
[0019] Fig. 5 is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated motion path, according to aspects of the present disclosure.
[0020] Fig. 6A is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated linear motion path, according to aspects of the present disclosure. [0021] Fig. 6B is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated curved motion path, according to aspects of the present disclosure. [0022] Fig. 6C is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated curved motion path, according to aspects of the present disclosure. [0023] Fig. 7 is a flow diagram illustrating a method for generating interpolated image slices for a contextual multiplanar image reconstruction sequence, according to aspects of the present disclosure.
[0024] Fig. 8 is a flow diagram illustrating a method for reconstructing image slices based on a direction of uncertainty for a contextual multiplanar image reconstruction sequence, according to aspects of the present disclosure.
[0025] Fig. 9 is a screen of a graphical user interface that includes a reconstructed image frame corresponding to a target view, according to aspects of the present disclosure.
DETAILED DESCRIPTION
[0026] For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure relates. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one embodiment may be combined with the features, components, and/or steps described with respect to other embodiments of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately.
[0027] In Fig. 1, an ultrasound system 100 according to embodiments of the present disclosure is shown in block diagram form. An ultrasound imaging apparatus or ultrasound probe 10 has a transducer array 12 comprising a plurality of ultrasound transducer elements or acoustic elements. In some instances, the array 12 may include any number of acoustic elements. For example, the array 12 can include between 1 acoustic element and 100000 acoustic elements, including values such as 2 acoustic elements, 4 acoustic elements, 36 acoustic elements, 64 acoustic elements, 128 acoustic elements, 300 acoustic elements, 812 acoustic elements, 3000 acoustic elements, 9000 acoustic elements, 30,000 acoustic elements, 65,000 acoustic elements, and/or other values both larger and smaller. In some instances, the acoustic elements of the array 12 may be arranged in any suitable configuration, such as a linear array, a planar array, a curved array, a curvilinear array, a circumferential array, an annular array, a phased array, a matrix array, a one-dimensional (1D) array, a 1.X-dimensional array (e.g., a 1.5D array), or a two-dimensional (2D) array. The array of acoustic elements (e.g., one or more rows, one or more columns, and/or one or more orientations) can be uniformly or independently controlled and activated. The array 12 can be configured to obtain one-dimensional, two-dimensional, and/or three-dimensional images of patient anatomy.
[0028] Although the present disclosure refers to synthetic aperture external ultrasound imaging using an external ultrasound probe, it will be understood that one or more aspects of the present disclosure can be implemented in any suitable ultrasound imaging probe or system, including external ultrasound probes and intraluminal ultrasound probes. For example, aspects of the present disclosure can be implemented in ultrasound imaging systems using a mechanically-scanned external ultrasound imaging probe, an intracardiac (ICE) echocardiography catheter and/or a transesophageal echocardiography (TEE) probe, a rotational intravascular ultrasound (IVUS) imaging catheter, a phased-array IVUS imaging catheter, a transthoracic echocardiography (TTE) imaging device, or any other suitable type of ultrasound imaging device.
[0029] Referring again to Fig. 1, the acoustic elements of the array 12 may comprise one or more piezoelectric/piezoresistive elements, lead zirconate titanate (PZT), piezoelectric micromachined ultrasound transducer (PMUT) elements, capacitive micromachined ultrasound transducer (CMUT) elements, and/or any other suitable type of acoustic elements. The one or more acoustic elements of the array 12 are in communication with (e.g., electrically coupled to) electronic circuitry 14. In some embodiments, such as the embodiment of Fig. 1, the electronic circuitry 14 can comprise a microbeamformer (μBF). In other embodiments, the electronic circuitry comprises a multiplexer circuit (MUX). The electronic circuitry 14 is located in the probe 10 and communicatively coupled to the transducer array 12. In some embodiments, one or more components of the electronic circuitry 14 can be positioned in the probe 10. In some embodiments, one or more components of the electronic circuitry 14 can be positioned in a computing device or processing system 28. The computing device 28 may be or include a processor, such as one or more processors in communication with a memory. As described further below, the computing device 28 may include a processor circuit as illustrated in Fig. 2.
In some aspects, some components of the electronic circuitry 14 are positioned in the probe 10 and other components of the electronic circuitry 14 are positioned in the computing device 28. The electronic circuitry 14 may comprise one or more electrical switches, transistors, programmable logic devices, or other electronic components configured to combine and/or continuously switch between a plurality of inputs to transmit signals from each of the plurality of inputs across one or more common communication channels. The electronic circuitry 14 may be coupled to elements of the array 12 by a plurality of communication channels. The electronic circuitry 14 is coupled to a cable 16, which transmits signals including ultrasound imaging data to the computing device 28. [0030] In the computing device 28, the signals are digitized and coupled to channels of a system beamformer 22, which appropriately delays each signal. The delayed signals are then combined to form a coherent steered and focused receive beam. System beamformers may comprise electronic hardware components, hardware controlled by software, or a microprocessor executing beamforming algorithms. In that regard, the beamformer 22 may be referenced as electronic circuitry. In some embodiments, the beamformer 22 can be a system beamformer, such as the system beamformer 22 of Fig. 1, or it may be a beamformer implemented by circuitry within the ultrasound probe 10. In some embodiments, the system beamformer 22 works in conjunction with a microbeamformer (e.g., electronic circuitry 14) disposed within the probe 10. The beamformer 22 can be an analog beamformer in some embodiments, or a digital beamformer in some embodiments. In the case of a digital beamformer, the system includes A/D converters which convert analog signals from the array 12 into sampled digital echo data. The beamformer 22 generally will include one or more microprocessors, shift registers, and/or digital or analog memories to process the echo data into coherent echo signal data. Delays are effected by various means such as by the time of sampling of received signals, the write/read interval of data temporarily stored in memory, or by the length or clock rate of a shift register as described in U.S. Patent No. 4,173,007 to McKeighen et al., the entirety of which is hereby incorporated by reference herein. Additionally, in some embodiments, the beamformer can apply an appropriate weight to each of the signals generated by the array 12. The beamformed signals from the image field are processed by a signal and image processor 24 to produce 2D or 3D images for display on an image display 30. The signal and image processor 24 may comprise electronic hardware components, hardware controlled by software, or a microprocessor executing image processing algorithms. It generally will also include specialized hardware or software which processes received echo data into image data for images of a desired display format such as a scan converter. In some embodiments, beamforming functions can be divided between different beamforming components. For example, in some embodiments, the system 100 can include a microbeamformer located within the probe 10 and in communication with the system beamformer 22. The microbeamformer may perform preliminary beamforming and/or signal processing that can reduce the number of communication channels required to transmit the receive signals to the computing device 28. [0031] Control of ultrasound system parameters such as scanning mode (e.g., B-mode,
M-mode), probe selection, beam steering and focusing, and signal and image processing is done under control of a system controller 26 which is coupled to various modules of the system 100. The system controller 26 may be formed by application specific integrated circuits (ASICs) or microprocessor circuitry and software data storage devices such as RAMs, ROMs, or disk drives. In the case of the probe 10, some of this control information may be provided to the electronic circuitry 14 from the computing device 28 over the cable 16, conditioning the electronic circuitry 14 for operation of the array as required for the particular scanning procedure. The user inputs these operating parameters by means of a user interface device 20.
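To make the receive beamforming described in paragraph [0030] above more concrete, the following is a minimal delay-and-sum sketch in Python/NumPy. It assumes a linear array, a known speed of sound, and already-digitized per-channel echo data; the array geometry, function name, and parameter values are illustrative assumptions and are not part of the disclosed system.

    import numpy as np

    def delay_and_sum(rf, element_x, fs, c, focus_x, focus_z, weights=None):
        # rf        : (n_elements, n_samples) digitized echo samples per channel
        # element_x : (n_elements,) lateral element positions [m]
        # fs        : sampling rate [Hz]; c : speed of sound [m/s]
        # focus_x, focus_z : receive focus location [m]
        n_elements, n_samples = rf.shape
        if weights is None:
            weights = np.ones(n_elements)          # apodization weights
        # Two-way time of flight: transmit path to the focus depth plus the
        # return path from the focus back to each individual element.
        return_dist = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2)
        delays = (focus_z + return_dist) / c       # seconds
        idx = np.clip(np.round(delays * fs).astype(int), 0, n_samples - 1)
        # Select the appropriately delayed sample on each channel and sum.
        return float(np.sum(weights * rf[np.arange(n_elements), idx]))

    # Example: one focal point, 64 elements spanning a 19.2 mm aperture.
    rf = np.random.randn(64, 4096)
    x = np.linspace(-9.6e-3, 9.6e-3, 64)
    sample = delay_and_sum(rf, x, fs=40e6, c=1540.0, focus_x=0.0, focus_z=30e-3)

Repeating this summation over a grid of focal points yields the coherent echo signal data that the signal and image processor converts into a displayed image.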
[0032] In some embodiments, the image processor 24 is configured to generate images of different modes to be further analyzed or output to the display 30. For example, in some embodiments, the image processor can be configured to compile a B-mode image, such as a live B-mode image, of an anatomy of the patient. In other embodiments, the image processor 24 is configured to generate or compile an M-mode image. An M-mode image can be described as an image showing temporal changes in the imaged anatomy along a single scan line.
[0033] It will be understood that the computing device 28 may comprise hardware circuitry, such as a computer processor, application-specific integrated circuit (ASIC), field- programmable gate array (FPGA), capacitors, resistors, and/or other electronic devices, software, or a combination of hardware and software. In some embodiments, the computing device 28 is a single computing device. In other embodiments, the computing device 28 comprises separate computer devices in communication with one another.
[0034] Fig. 2 is a schematic diagram of a processor circuit 150, according to embodiments of the present disclosure. The processor circuit 150 may be implemented in the computing device 28, the signal and image processor 24, the controller 26, and/or the probe 10 of Fig. 1. As shown, the processor circuit 150 may include a processor 160, a memory 164, and a communication module 168. These elements may be in direct or indirect communication with each other, for example via one or more buses.
[0035] The processor 160 may include a central processing unit (CPU), a digital signal processor (DSP), an ASIC, a controller, an FPGA, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor 160 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0036] The memory 164 may include a cache memory (e.g., a cache memory of the processor 160), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and non-volatile memory, or a combination of different types of memory. In an embodiment, the memory 164 includes a non-transitory computer-readable medium. The memory 164 may store instructions 166. The instructions 166 may include instructions that, when executed by the processor 160, cause the processor 160 to perform the operations described herein with reference to the processor 28 and/or the probe 10 (Fig. 1). Instructions 166 may also be referred to as code. The terms “instructions” and “code” should be interpreted broadly to include any type of computer- readable statement(s). For example, the terms “instructions” and “code” may refer to one or more programs, routines, sub-routines, functions, procedures, etc. “Instructions” and “code” may include a single computer-readable statement or many computer-readable statements.
[0037] The communication module 168 can include any electronic circuitry and/or logic circuitry to facilitate direct or indirect communication of data between the processor 28, the probe 10, and/or the display 30. In that regard, the communication module 168 can be an input/output (I/O) device. In some instances, the communication module 168 facilitates direct or indirect communication between various elements of the processor circuit 150 and/or the processing system 106 (Fig. 1A).
[0038] As mentioned above, some ultrasound imaging systems are configured to obtain three-dimensional ultrasound data of a volume of the patient. In some embodiments, an ultrasound imaging system comprises an ultrasound transducer comprising an array of one or more transducer elements, where the ultrasound transducer is configured to scan or sweep through a volume of the patient to obtain a three-dimensional ultrasound data set of the volume. This three-dimensional ultrasound data can be used to generate three-dimensional images or models of the anatomy. Alternatively, two-dimensional images corresponding to a variety of imaging planes that intersect the volume can be reconstructed from the three-dimensional ultrasound data. In that regard, Fig. 3 shows a plurality of image slices 210, 220 forming a three- dimensional ultrasound data set of a volume 200. The slices 210, 220 are obtained as part of an image sequence using an ultrasound transducer array 12. In the illustrated embodiment, the array 12 comprises a two-dimensional array of ultrasound transducer elements that can be controlled as a phased array to electronically steer ultrasonic beams of acoustic energy to sweep out the image slices 210, 220, in a plurality of corresponding image planes. In other embodiments, the ultrasound transducer array 12 may comprise a single ultrasound transducer element, or a one dimensional array of transducer elements in which the transducer 12 manually scans in one or more degrees of freedom to acquire the three-dimensional data set of the volume 200.
[0039] In Fig 3, the image slices 210, 220 may correspond to image planes of an image sequence performed by the ultrasound transducer. Thus, the three-dimensional data set of the volume 200 may be formed by acquiring ultrasound data in a step-wise fashion where an image slice is obtained by steering or sweeping the ultrasound beams or scan lines across the corresponding image plane, and combining the plurality of different image slices 210, 220 to form the three-dimensional data set. In some embodiments, the slices 210, 220 may be combined into a three-dimensional image by determining interpolated intensity values from the different slices 210, 220, such that an intensity value can be determined for any location (e.g., in a Cartesian coordinate system) in the three-dimensional image. Additionally, Fig. 3 shows a reconstructed image 230, which is reconstructed from the three-dimensional data set after the image sequence has completed scanning the volume 200. Thus, the reconstructed image 230, in contrast to the image slices 210, 220, does not directly correspond to the two-dimensional segments of the three-dimensional imaging sequence, and does not represent an image plane that intersects the ultrasound transducer 12. For example, the reconstructed image 230 may represent an image plane passing through the volume if the ultrasound transducer 12 had been moved to a different position and orientation relative to the volume 200. However, in some embodiments, an image can be reconstructed from the three-dimensional data set that is representative of an imaging plane that does intersect the ultrasound transducer 12.
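As one illustration of how the acquired slices can be combined so that an intensity value is available at any Cartesian location, the sketch below resamples a stack of acquired slices onto a regular grid by linear interpolation along the sweep direction. It assumes, for simplicity, parallel slices at known elevational positions; in practice the slice geometry follows the actual sweep, and the names and values used here are hypothetical.

    import numpy as np
    from scipy.interpolate import interp1d

    def slices_to_volume(slices, slice_z, z_out):
        # slices  : (n_slices, H, W) pixel intensities of the acquired image planes
        # slice_z : (n_slices,) elevational position of each slice [mm]
        # z_out   : (D,) regularly spaced output positions [mm]
        # Returns a (D, H, W) Cartesian volume with intensities linearly
        # interpolated between neighbouring acquired slices.
        f = interp1d(slice_z, slices, axis=0, kind="linear",
                     bounds_error=False, fill_value=0.0)
        return f(z_out)

    # Example: 32 slices swept over 40 mm, resampled at 0.5 mm spacing.
    volume = slices_to_volume(np.random.rand(32, 200, 200),
                              np.linspace(0.0, 40.0, 32),
                              np.arange(0.0, 40.0, 0.5))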
[0040] In some embodiments, multiplanar reconstruction (MPR) can be used to reconstruct, from the three-dimensional ultrasound data, a target image corresponding to a target view or imaging plane of the patient (e.g., trans-ventricular, trans-cerebellar, trans-thalamic, apical view, etc.). For example, MPR may be used to generate the reconstructed image 230 shown in Fig. 3. MPR can be performed using input from various image processing techniques, including artificial intelligence (A.I.), machine learning, and/or deep learning techniques, to identify a two-dimensional cross section in the three-dimensional ultrasound data that includes a combination and/or arrangement of anatomical structures associated with a particular view. In that regard, while MPR involves generating an image for an identified plane within a three- dimensional data set, the estimation or identification of the plane that will be used in the MPR image may be referred to as plane estimation (PE). Methods for PE are described in, for example, U.S. Patent No. 6,443,896, issued September 3, 2002, U.S. Publication No. 2014/0155737, filed August 13, 2012, U.S. Publication No. 2010/0268085, filed November 13, 2008, and U.S. Publication No. 2019/0272667, filed June 12, 2017, and Joseph Redmon, et al., “You Only Look Once: Unified, Real-Time Object Detection,” The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779-788, each of which is hereby incorporated by reference in its entirety.
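For illustration only, a minimal sketch of the MPR sampling step itself is given below: given a plane estimate (an origin plus two orthonormal in-plane axes, for example as produced by a plane-estimation step), a two-dimensional image is sampled from the Cartesian volume by trilinear interpolation. The grid size, spacing, and function names are assumptions for the example, not the disclosed implementation.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def extract_mpr(volume, origin, u_axis, v_axis, size=(256, 256), spacing=1.0):
        # volume : (Z, Y, X) Cartesian intensity grid (voxel units)
        # origin : (3,) plane centre in voxel coordinates (z, y, x)
        # u_axis, v_axis : (3,) orthonormal in-plane direction vectors
        origin = np.asarray(origin, float)
        u_axis = np.asarray(u_axis, float)
        v_axis = np.asarray(v_axis, float)
        h, w = size
        u = (np.arange(w) - w / 2.0) * spacing
        v = (np.arange(h) - h / 2.0) * spacing
        vv, uu = np.meshgrid(v, u, indexing="ij")
        # Voxel coordinates of every pixel of the requested plane.
        pts = (origin[:, None, None]
               + uu[None] * u_axis[:, None, None]
               + vv[None] * v_axis[:, None, None])          # (3, h, w)
        # Trilinear interpolation of the volume at those coordinates.
        return map_coordinates(volume, pts.reshape(3, -1),
                               order=1, mode="nearest").reshape(h, w)

    # Example: an oblique plane through the centre of a toy volume.
    vol = np.random.rand(100, 160, 160)
    img = extract_mpr(vol, origin=[50, 80, 80],
                      u_axis=[0, 0, 1], v_axis=[0, 1, 0], size=(128, 128))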
[0041] While PE beneficially allows for the automatic reconstruction of a target image with less input from the sonographer (i.e., manual adjustment of probe to achieve the target view), PE techniques may have some drawbacks. For example, the sonographer may not be able to easily confirm that an image generated using PE and displayed as an MPR is the correct or optimal view. In that regard, because the target image is automatically reconstructed and displayed with little or no interaction from the sonographer, the sonographer may lack spatial context that would otherwise be gained if the sonographer was manually scanning the probe across the anatomy around the target view. Thus, confidence that the correct image has been acquired may be diminished. Further, an incorrect or suboptimal image plane may be selected by the PE algorithm, and the sonographer may not be able to easily identify what adjustments are to be made to more completely achieve a desired view.
[0042] Accordingly, the present disclosure provides devices, systems, and associated methods for providing contextual visualizations associated with automatically reconstructed two- dimensional images, such as images generated from PE. In particular, the present disclosure provides for automated image reconstruction techniques that allow for a target image to be generated and displayed, while still providing spatial contextual information to the sonographer so that the sonographer can confirm that the correct image plane has been chosen to automatically reconstruct the target image. For example, in an exemplary workflow, the sonographer places an ultrasound probe configured to obtain three-dimensional ultrasound data on the skin of the patient at a location generally associated with the target image plane. A three- dimensional data set of a volume of the patient is acquired, and the sonographer can set the probe down. A processing system and/or processor circuit analyzes the three-dimensional ultrasound data using, for example, PE and/or MPR, to identify, reconstruct, or otherwise generate a target image associated with a target image plane or view, such as the trans-cerebellar view. The target image can then be displayed using MPR to the user, which may allow the user to make measurements or assessments of the anatomy in the target image.
[0043] Further, the processor circuit is configured to provide spatial context for the target image by generating a plurality of adjacent images that are associated with imaging planes adjacent to the target image plane. The adjacent images are generated such that the target image and the adjacent images all correspond to a common simulated motion path or trajectory. The simulated motion path is computed so that scanning through the adjacent images and the target image simulates physical scanning or pacing of the ultrasound probe around the target image plane. As described below, in an exemplary embodiment, the sonographer may scan through the adjacent images along the simulated motion path using a keyboard, track ball, or other interface device as though the user was manually moving the probe around the target image plane. Accordingly, the sonographer may have increased confidence that the processor circuit has correctly and/or optimally reconstructed the target image. Further, by scanning through the reconstructed adjacent image, the sonographer may be able to determine whether the target image reconstruction should be revised or redone.
[0044] Accordingly, embodiments of the present disclosure provide for an experience of seeing two-dimensional images that are proximate or near to a target plane. The two-dimensional images can be created or reconstructed from a three-dimensional data set, thereby recreating a traditional experience in order to improve the sonographer's confidence that the desired anatomical planes were correctly detected by the automated procedure. Specifically, the systems and methods described herein enable the simulation of an ultrasound probe movement which allows a smooth transition from one automatically detected plane to another automatically detected plane. These smooth image transitions emulate the transducer movements, which are familiar to the sonographer from a standard two-dimensional ultrasound exam. Smooth transitions between different view planes can be achieved, for example, by interpolating the plane parameters (normal and offset vectors) and thus obtaining continuous spline trajectories for two-dimensional plane displacement in three-dimensional space.
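A minimal sketch of the plane-parameter interpolation mentioned above, under simplifying assumptions: the plane normals are interpolated on the unit sphere (spherical linear interpolation) and the plane origins linearly, producing a sequence of intermediate planes between two detected view planes. The way-point values and function names are illustrative only.

    import numpy as np

    def slerp(n0, n1, t):
        # Spherical linear interpolation between two (unit) normal vectors.
        n0 = np.asarray(n0, float); n0 /= np.linalg.norm(n0)
        n1 = np.asarray(n1, float); n1 /= np.linalg.norm(n1)
        theta = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))
        if theta < 1e-6:                       # nearly parallel: plain lerp
            n = (1.0 - t) * n0 + t * n1
        else:
            n = (np.sin((1.0 - t) * theta) * n0 + np.sin(t * theta) * n1) / np.sin(theta)
        return n / np.linalg.norm(n)

    def interpolate_planes(p0, n0, p1, n1, n_steps=8):
        # Yield (origin, normal) pairs for planes between two detected planes.
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        for t in np.linspace(0.0, 1.0, n_steps):
            yield (1.0 - t) * p0 + t * p1, slerp(n0, n1, t)

    # Example: transition from a detected TT plane to a detected TV plane.
    for origin, normal in interpolate_planes([60, 80, 80], [0.0, 0.0, 1.0],
                                             [70, 82, 78], [0.0, 0.17, 0.98]):
        pass  # each (origin, normal) pair would drive an MPR sampler

Each interpolated (origin, normal) pair can be handed to an MPR sampler such as the one sketched earlier to produce the intermediate frames of the smooth transition.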
[0045] The sequence of image frames shown to the user may be a compilation of adjacent MPRs determined based on geometric spacing. In some embodiments, the sequence of images or image frames shown includes a selection of geometrically adjacent images that have been selected, e.g., by an A. I. algorithm, to be somewhat similar to the target image plane. For example, in case of a fetal head biometry assessment, where multiple images are captured at different image planes, the proposed dynamic representation can be used to perform interpolation between respective image plane parameters. These interpolated images can be used to dynamically transition from one selected plane to the other, thus performing a transition through the three-dimensional volume in which the planes are located.
[0046] Once the target image is generated from the three-dimensional data set, one or more adjacent images could be generated in a number of ways. In one embodiment, adjacent image frames correspond to planes parallel to the target plane. In applications where more than one plane of interest is within the volume (e.g., abdominal and thoracic, trans-ventricular (TV), trans-cerebellar (TC), trans-thalamic (TT)), a set of adjacent image frames can be created by interpolating between the planes of interest. For this purpose, interpolation can be done in a fixed order, e.g., always going from TT to TV, or by finding the two closest planes. In some embodiments, an A.I.-based approach for plane detection provides a way of measuring uncertainty such that a direction of highest uncertainty may be identified. Images may be interpolated along the direction of highest uncertainty. This facilitates efficient manual correction of automatically determined images.
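The direction of highest uncertainty mentioned above can be estimated, for example, from the spread of repeated plane predictions (such as an ensemble or multiple stochastic forward passes of a plane-detection model). The sketch below is a hedged illustration of that idea, with hypothetical names and random stand-in data: the sample covariance matrix of the predicted plane origins is eigendecomposed, and adjacent plane origins are spaced along its principal eigenvector.

    import numpy as np

    def highest_uncertainty_direction(origin_samples):
        # origin_samples : (n_samples, 3) plane origins predicted by repeated
        # runs of a plane-estimation model. Returns the unit eigenvector of
        # the sample covariance matrix with the largest eigenvalue.
        cov = np.cov(np.asarray(origin_samples, float), rowvar=False)   # (3, 3)
        eigvals, eigvecs = np.linalg.eigh(cov)
        d = eigvecs[:, np.argmax(eigvals)]
        return d / np.linalg.norm(d)

    def adjacent_origins(target_origin, direction, step=1.0, n_each_side=5):
        # Plane origins spaced along the direction of highest uncertainty,
        # centred on the automatically estimated target plane.
        offsets = np.arange(-n_each_side, n_each_side + 1) * step
        return np.asarray(target_origin, float) + offsets[:, None] * direction

    # Example: five adjacent planes on each side of the estimated target plane.
    d = highest_uncertainty_direction(np.random.randn(20, 3))
    origins = adjacent_origins([64.0, 80.0, 80.0], d, step=0.8)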
[0047] In applications where the anatomical target image plane is required to exclude certain anatomical structures or features, the adjacent images can be generated such that at least one adjacent image includes the anatomical feature to be excluded, and the other adjacent images are generated to show the anatomical feature disappearing from the field of view (e.g., the cerebellum in the TV plane of the fetal brain). For educational or reporting purposes, the adjacent image set can be displayed with a schematic anatomical view, since the three-dimensional volume together with the estimated plane geometries and located anatomical objects provides sufficient information for the registration to a three-dimensional anatomical model. [0048] Fig. 4 is a flow chart illustrating a method 300 for providing spatial context for automated ultrasound image plane reconstructions. It will be understood that one or more steps of the method 300 may be performed by the ultrasound imaging system 100 shown in Fig. 1, and/or the processor circuit 150 shown in Fig. 2, for example. In step 310, a processor circuit receives, from an ultrasound probe communicatively coupled to the processor circuit, three-dimensional ultrasound data of an anatomy of a patient. In some embodiments, the ultrasound data may be obtained during a fetal ultrasound procedure, a trans-thoracic ultrasound procedure, a cardiac ultrasound procedure, or any other suitable procedure. The ultrasound probe includes an ultrasound transducer having an array of one or more ultrasound transducer elements. In some embodiments, the ultrasound transducer comprises a one-dimensional array, a 1.5-dimensional array, a 1.X-dimensional array, a two-dimensional array, or any other suitable type of array of ultrasound transducer elements. In some embodiments, the array is mechanically scanned to obtain a plurality of image slices of the volume. In some embodiments, the array is operated as a solid-state array, or a phased array configured to electronically scan across the volume. In some embodiments, the three-dimensional ultrasound data is made up of a plurality of two-dimensional images or image slices. In some embodiments, the three-dimensional ultrasound data is made up of a plurality of individual scan lines of ultrasound data.
[0049] In some embodiments, the ultrasound data received by the processor circuit comprises raw ultrasound signals or data. In other embodiments, the ultrasound data received by the processor circuit comprises beamformed, partially beamformed, and/or filtered ultrasound data. In some embodiments, the processor circuit receives the ultrasound data directly from the ultrasound probe. In some embodiments, the ultrasound data is first received by a memory, which stores the ultrasound data, and the processor circuit then receives the ultrasound data from the memory. Accordingly, in some embodiments, the processor circuit receives the ultrasound data indirectly from the ultrasound probe after being stored in the memory.
[0050] In some embodiments, step 310 includes generating a user instruction to be output to a user output device or interface (e.g., display, speaker, etc.) to first place the ultrasound probe in a general location such that a target view is within a three-dimensional field of view of the ultrasound probe. For example, the user instruction may include a diagrammatic illustration of the patient’s body, and an indicator showing how to position and orient the probe relative to the patient’s body. [0051] In step 320, the processor circuit generates, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy. For example, as described above, the target image plane or view to be achieved is the trans-ventricular, trans- cerebellar, or trans-thalamic view of a fetus, an apical view of a patient’s heart, or any other suitable view. In some embodiments, generating the target image comprises using a PE and/or MPR process to automatically reconstruct the target image. However, a variety of image processing and recognition techniques can be used, including A.I. techniques, machine learning techniques, deep learning techniques, state machines, etc. In some embodiments, generating the target image may include comparing the three-dimensional image data to a model of the anatomy. In some embodiments, generating the target image includes comparing various portions of the three-dimensional ultrasound data representative of different cross sections of the three-dimensional field of view to one or more exemplary images of the anatomy.
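Where the target image is found by comparing candidate cross-sections of the three-dimensional data to one or more exemplary images, one simple scoring scheme (an illustrative stand-in for the A.I.-based plane estimation described elsewhere, not the claimed method) is normalized cross-correlation against a reference view, sketched below with hypothetical names and random placeholder data.

    import numpy as np

    def ncc(a, b):
        # Normalized cross-correlation between two equally sized images.
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float(np.mean(a * b))

    def best_matching_slice(candidates, exemplar):
        # candidates : (n, H, W) cross-sections sampled from the 3D data
        # exemplar   : (H, W) exemplary image of the desired view
        scores = [ncc(c, exemplar) for c in candidates]
        return int(np.argmax(scores)), scores

    # Example with random data standing in for candidate MPR slices.
    idx, scores = best_matching_slice(np.random.rand(10, 128, 128),
                                      np.random.rand(128, 128))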
[0052] In step 330, the processor circuit generates, from the three-dimensional ultrasound data, one or more adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the one or more adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data. The adjacent image(s) generated during step 330 can be used to provide contextual information to the user, which can accompany the display of the target image generated in step 320.
[0053] As mentioned above, the simulated motion path may be representative of a physical adjustment or movement of the ultrasound probe, such as a translation, linear movement, compression, rotation, fanning, sweeping, rocking, or any other suitable type of motion. In some embodiments, step 330 includes determining or computing the simulated motion path. For example, the processor circuit may determine or compute the simulated motion path based on the target view to be reconstructed. The simulated motion path may be a parameter for reconstructing the adjacent images, in that the simulated motion path or trajectory defines the spatial relationships between the adjacent images to be reconstructed from the three-dimensional ultrasound data of the volume. In some embodiments, the path will be defined by a set of points which are smoothly connected into a curved line, e.g., based on spline interpolation techniques. The point set may be a set of anatomical landmarks, such as organs to be inspected, or may follow an anatomical structure such as the spine. In another embodiment, the path may be constructed such that it shows, for education, the degrees of freedom when maneuvering the transducer, such as translating the transducer position on the skin surface, or rotating/tilting the transducer around its three principal axes.
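As an illustration of the spline-based path construction described above, the sketch below fits a cubic spline through a small set of 3D way-points (e.g., anatomical landmarks) and returns sampled positions together with unit tangents; each tangent can serve as the normal of the image plane placed at that point, so the planes remain orthogonal to the path. The way-point values and function name are made up for the example.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def simulated_motion_path(points, n_samples=50):
        # points : (n_points, 3) way-points, e.g. anatomical landmarks.
        # Returns (positions, tangents), each of shape (n_samples, 3).
        points = np.asarray(points, float)
        # Parameterize by cumulative chord length for a well-behaved spline.
        seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
        t = np.concatenate(([0.0], np.cumsum(seg)))
        spline = CubicSpline(t, points, axis=0)
        s = np.linspace(t[0], t[-1], n_samples)
        positions = spline(s)
        tangents = spline.derivative()(s)
        tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
        return positions, tangents

    # Example: a path through three landmarks of a fetal head volume.
    pos, tan = simulated_motion_path([[60, 70, 40], [64, 80, 80], [58, 92, 120]])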
[0054] Fig. 5 shows a target image 410, an adjacent image 420, and an interpolated image 430 that correspond to a simulated motion path 440. In the illustrated embodiment, the darker rectangles are representative of image planes 410, 420 reconstructed from the ultrasound data. The white rectangle is representative of an interpolated image plane 430 generated using the target image 410 and the adjacent image 420. The target image 410, adjacent image 420 and the interpolated image 430 are generated to correspond to a simulated motion path 440. In that regard, the simulated motion path 440 may be determined or computed by the processor circuit, and may be representative of a simulated type of movement of the ultrasound probe (i.e., as if the ultrasound probe were to be physically scanned across a volume). In the illustrated embodiment, the simulated motion path 440 is representative of a mixed tilting/rocking movement of the ultrasound probe. The orthogonal axes 442 of the images 410, 420, 430, are tangential to the simulated motion path 440 at the point at which the images 410, 420, 430 intersect the simulated motion path 440.
[0055] It will be understood that a variety of simulated motion paths are contemplated by the present disclosure. The simulated motion paths may be pre-determined or selected based on the target imaging plane desired to be obtained. In that regard, Figs. 6A, 6B, and 6C are representative of image planes, both reconstructed and interpolated, that correspond to three different simulated motion paths. For example, in Fig. 6A, the image slices include a reconstructed image slice 510 represented by a dark rectangle, and interpolated images 520, represented by the white rectangles. The image slices 510, 520 are associated with a simulated linear motion path. Accordingly, the images correspond to image planes that are parallel to each other and spaced from each other. In Fig. 6B, the images 600 are associated with a simulated curved motion path, which includes a combination of simulated tilting and translation of the ultrasound probe. In Fig. 6C, the images 700 are associated with a simulated curved motion path, which includes a simulated tilting of the ultrasound probe. In some embodiments, the simulated motion path is determined or computed to include more than a single type of motion along its length. For example, a simulated motion path may be computed such that multiple image planes of interest (e.g., TC, TV, TT) would be obtained by the simulated movement of the probe along the motion path. Thus, the simulated motion path may be computed to have different segments associated with different types of simulated movements (e.g., translation, rocking, tilting, rotation, compression, etc.). In some embodiments, the processor circuit is configured to identify, from the three-dimensional ultrasound data, an image of interest different from the target image, and determine the simulated motion path based on the target image plane, the adjacent image plane, and the image plane of interest. The processor circuit can then output the image of interest to the display in response to receiving a user input, as further described below.
[0056] As shown in Figs. 5 and 6A-6C, the processor circuit may be configured to generate one or more interpolated images (e.g., 430, Fig. 5) to accompany the one or more adjacent images (410, 420, Fig. 5). In that regard, in some instances, three-dimensional image data of a volume may be relatively sparse. For example, with conventional two-dimensional imaging procedures, the ultrasound probe may be capable of obtaining and displaying approximately 30 frames per second. Accordingly, sonographers may be accustomed to the ability to view two-dimensional views with a high degree of temporal and spatial resolution. However, because ultrasound imaging procedures are constrained by the speed of sound, the number of scan lines that can be obtained in a given amount of time is limited. Thus, for three- dimensional ultrasound imaging, temporal and/or spatial resolution may be reduced compared to two-dimensional ultrasound imaging. Therefore, ultrasound data points (e.g., voxels) available for the reconstruction of adjacent images may be limited.
[0057] The present application provides that one or more images may be interpolated using one or more reconstructed image frames. In that regard, Fig. 7 provides a flow chart illustrating a method 800 for generating and outputting an interpolated image. It will be understood that one or more steps of the method 800 may be performed by the ultrasound imaging system 100 shown in Fig. 1, and/or the processor circuit 150 shown in Fig. 2, for example. In step 810, the processor circuit generates an adjacent image frame or image slice corresponding to a first image plane adjacent to the target image plane. Referring to Fig. 5, the target image and the adjacent image are reconstructed from the three-dimensional ultrasound data, as described above. In that regard, the adjacent image frame may be generated from a three-dimensional image that was created by interpolating the intensity values from a set of two- dimensional image slices obtained of a volume by an ultrasound imaging system. In some embodiments, the interpolation of the two-dimensional images is made with respect to a Cartesian coordinate system or grid. The target image and the adjacent image may be generated or reconstructed using MPR.
[0058] Referring to Figs. 5 and 7, in step 820, the processor circuit interpolates the image position and orientation between the position and orientation of the target image 410 and the adjacent image 420 to then generate the interpolated image 430 as an MPR at the interpolated position and orientation. In some aspects, the MPR image 430 is generated based on the interpolated intensity values of the three-dimensional image, as described above. As shown in Fig. 5, the adjacent images and the interpolated image are all generated to correspond to the simulated motion path 440. Accordingly, the orthogonal axis 442 of the interpolated image, like the reconstructed adjacent images, is tangential to the curved simulated motion path. In other words, the target image plane, the adjacent image plane, and the interpolated image plane are all orthogonal to the simulated motion path at the points at which the planes intersect the simulated motion path. In step 830, the processor circuit outputs the interpolated image to the display. For example, the processor circuit may be configured to generate one or more interpolated images between a target image and an adjacent image, and/or between adjacent images, and arrange the adjacent and interpolated images to form an image stack that can be scrolled or scanned through in response to the input from a user. For example, in some embodiments, as the sonographer scrolls up or down using a mouse, track ball, etc., the processor circuit incrementally updates the display based on the extent of the input. For example, the speed at which the sonographer rotates the track ball or swipes across a track pad may determine the speed at which the processor circuit updates the display with successive adjacent and interpolated images. Thus, the sonographer may have a similar degree of control over scanning through adjacent image planes as the sonographer would have had using a traditional two-dimensional imaging approach (i.e., without MPR). This type of visualization may provide for a familiar imaging experience while leveraging the workflow advantages of automated image reconstruction.
[0059] For example, referring again to Fig. 6A, the interpolated image frames 510 can supplement the reconstructed adjacent image frames 520 to provide a smoother, more natural scanning visualization of the image planes adjacent to a target image plane. Thus, in some embodiments, the processor circuit is configured to interpolate between a target image and an adjacent image, or between two adjacent images, to generate the interpolated image. The interpolated image can then be output to the display in response to a user input indicating a direction of motion along the simulated motion path. In an exemplary embodiment, the interpolated image is generated by the processor circuit and is representative of an intermediate image plane in the volume between two reconstructed images generated directly from the three-dimensional ultrasound data. The interpolated image is associated with the same simulated motion path as the reconstructed adjacent image(s). Accordingly, as the user scans through the adjacent and interleaved interpolated images, the result is an image stream that more closely resembles manually scanning across a target region of an imaged volume, as performed in two-dimensional imaging procedures.
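One way the interleaving described above could be organized (a sketch assuming the frames are simply ordered along the path; the helper name is hypothetical):

```python
def build_image_stack(reconstructed, interpolated_between):
    """Interleave reconstructed frames with the interpolated frames that lie
    between each consecutive pair, preserving their order along the path.

    reconstructed        : list of frames ordered along the simulated motion path
    interpolated_between : list of len(reconstructed) - 1 lists of in-between frames
    """
    stack = []
    for i, frame in enumerate(reconstructed[:-1]):
        stack.append(frame)
        stack.extend(interpolated_between[i])
    stack.append(reconstructed[-1])
    return stack

# Three reconstructed planes with two interpolated frames in each gap:
stack = build_image_stack(["A0", "A1", "A2"], [["i01a", "i01b"], ["i12a", "i12b"]])
# -> ["A0", "i01a", "i01b", "A1", "i12a", "i12b", "A2"]
```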
[0060] The interpolated images may be obtained by interpolation of the plane normal vectors and the plane offset vectors. The interpolation may be performed based on the position and orientation associated with each of the reconstructed images to obtain an interpolated plane position and normal. The intensity values of the 3D grid corresponding to the reconstructed images can then be interpolated to generate the intensity values for the interpolated plane. In some embodiments, motion vectors may be computed for individual pixels to generate the interpolated image. Methods and techniques for generating interpolated image frames can be found at least in U.S. Patent No. 7,771,354, issued August 10, 2010, the entirety of which is hereby incorporated by reference.
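The disclosure points to motion-vector-based frame interpolation (U.S. Patent No. 7,771,354) for the intensity step; as a deliberately simplified stand-in, the sketch below blends the two reconstructed frames linearly and omits the motion-compensated warping that the referenced technique would perform. It is illustrative only.

```python
import numpy as np

def blend_frames(frame_a, frame_b, t):
    """Naive intensity interpolation between two reconstructed frames.

    t in [0, 1]: 0 returns frame_a, 1 returns frame_b. A motion-compensated
    method would first warp each frame along per-pixel motion vectors and
    then blend; this sketch omits that warping step.
    """
    a = np.asarray(frame_a, dtype=float)
    b = np.asarray(frame_b, dtype=float)
    return (1.0 - t) * a + t * b

mid_frame = blend_frames(np.zeros((256, 256)), np.ones((256, 256)), t=0.5)
```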
[0061] In step 340, the processor circuit outputs the target image slice to a display in communication with the processor circuit. As described below, the processor circuit may output a graphical user interface to the display, wherein the GUI includes the target image. In step 350, the processor circuit receives a user input representative of a direction of motion along the simulated motion path. For example, the processor circuit may receive the user input via a user input device in communication with the processor circuit. The user input device may include a mouse, keyboard, trackball, microphone, touch-screen display, track pad, physical button, or any other suitable type of user input device. In that regard, the user input may be provided by the sonographer by pressing an arrow key on a keyboard, where the arrow direction corresponds to a direction along the simulated motion path (e.g., forward, backward). In some embodiments, the user input is received by a scrolling device, such as a mouse, track ball, or track pad. The direction of the scrolling may correspond to the direction along the simulated motion path. However, it will be understood that any suitable user input could be used to indicate the direction of motion along the simulated path.

[0062] In step 360, the processor circuit outputs an adjacent image from the plurality of adjacent images that corresponds to the direction of motion. For example, in some embodiments, the target image is first output to the display. The sonographer then uses the user input device to scan forward or backward along the simulated direction of motion, and the processor circuit outputs one or more adjacent images generated in step 330. Accordingly, the sonographer can control the display of the reconstructed images to scan forward and backward along the simulated motion path as if the user was manually scanning the ultrasound probe along the motion path, but after the ultrasound image data has already been obtained and the ultrasound probe has been set down. Using this process, the sonographer is given the benefit of both (1) automated reconstruction of a target image from a three-dimensional data set, and (2) the ability to gain spatial context to confirm that the automatically reconstructed target image is correctly and/or optimally reconstructed.
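How the received input might be mapped onto the image stack (a sketch; the class, its methods, and the starting-index assumption are hypothetical and only stand in for the GUI plumbing described in steps 350-360):

```python
class MotionPathViewer:
    """Steps through an ordered stack of target, adjacent, and interpolated
    images in response to scroll-style input along the simulated motion path."""

    def __init__(self, image_stack, start_index=None):
        self.stack = list(image_stack)
        # Start on the target image, assumed here to sit mid-stack.
        self.index = len(self.stack) // 2 if start_index is None else start_index

    def on_scroll(self, steps):
        """steps > 0 scans forward along the path, steps < 0 scans backward;
        the magnitude (e.g., from trackball speed) controls how far to move."""
        self.index = max(0, min(len(self.stack) - 1, self.index + steps))
        return self.current()

    def current(self):
        return self.stack[self.index]

viewer = MotionPathViewer(["A-2", "i-1", "A0 (target)", "i+1", "A+2"])
frame_forward = viewer.on_scroll(+1)   # e.g., mouse wheel up / forward arrow key
frame_back = viewer.on_scroll(-2)      # a faster backward scroll skips a frame
```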
[0063] In determining or computing the simulated motion path, it may be beneficial to identify a simulated path that is relevant to providing a confidence assessment that the automatically reconstructed target image is correctly determined. In that regard, the simulated motion path may be determined based on a determination of a direction of uncertainty relative to the target image plane. Fig. 8 is a flow diagram of a method 900 for determining a simulated motion path based on a determination of a direction of uncertainty. It will be understood that one or more steps of the method 900 may be performed by the ultrasound imaging system 100 shown in Fig. 1, and/or the processor circuit 150 shown in Fig. 2, for example. In step 910, the processor circuit determines a direction of uncertainty relative to the target image plane. In some embodiments, the processor circuit determines the direction of uncertainty relative to the target image plane by applying a covariance matrix to the three-dimensional ultrasound data. In step 920, the simulated motion path is generated by the processor circuit based on the determined direction of uncertainty. In some embodiments, the processor circuit may determine a plurality of directions of uncertainty. In some embodiments, the processor circuit may determine a direction or vector of highest uncertainty. In other embodiments, the processor circuit determines a direction of median uncertainty, average uncertainty, minimum uncertainty, and/or any other suitable relative measure of uncertainty selected from one or more determined directions or vectors of uncertainty. In step 930, the processor circuit generates one or more adjacent images based on the simulated motion path determined in step 920, which in turn is based on the determined direction of uncertainty. With the adjacent images (and/or interpolated images) generated based on the determined direction of uncertainty, the sonographer may be more likely to identify incorrectly aligned target images, or to determine probe movements and adjustments to generate the target image such that it is aligned with a target view or target image plane. Further details on determining a direction associated with uncertainty in MPR procedures can be found in Alexander Schmidt-Richberg, et al., “Offset regression networks for view plane estimation in 3D fetal ultrasound,” Proc. SPIE, Vol. 10949, id. 109493K (March 15, 2019), the entirety of which is incorporated by reference.
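The disclosure does not spell out how the covariance matrix is formed; one plausible reading (an assumption made only for this sketch) is that several estimates of the target-plane position, e.g., from an ensemble of view-plane estimators, are available, in which case the principal eigenvector of their covariance gives the direction of highest uncertainty and the simulated motion path can be laid out along it.

```python
import numpy as np

def direction_of_highest_uncertainty(plane_center_estimates):
    """Principal axis of the spread of several plane-center estimates.

    plane_center_estimates : (n, 3) array of candidate target-plane positions
    (assumed input, e.g., from an ensemble of view-plane estimators).
    """
    pts = np.asarray(plane_center_estimates, dtype=float)
    cov = np.cov(pts, rowvar=False)               # 3 x 3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return eigvecs[:, -1]                         # eigenvector of the largest eigenvalue

def simulated_motion_path(center, direction, extent_mm=10.0, n_points=11):
    """Straight path through the target-plane center along the uncertainty direction."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    offsets = np.linspace(-extent_mm, extent_mm, n_points)
    return np.asarray(center, dtype=float) + offsets[:, None] * direction  # (n_points, 3)

estimates = np.random.normal(loc=(0.0, 0.0, 50.0), scale=(0.5, 0.5, 3.0), size=(20, 3))
path = simulated_motion_path(np.mean(estimates, axis=0),
                             direction_of_highest_uncertainty(estimates))
```

Adjacent (and interpolated) image planes would then be reconstructed at the sampled path positions, so that scrolling moves the view along the axis on which the automatic reconstruction is least certain.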
[0064] Fig. 9 shows a graphical user interface (GUI) 1000 of an ultrasound imaging system, where the GUI includes an anatomical diagram 1010 of a body portion and an MPR image 1020 of an image plane within the body portion. The anatomical diagram 1010 comprises a graphical representation of an anatomy of the patient (e.g., the head of a fetus) and is annotated with indicators showing how various target image planes intersect the anatomy. In the illustrated embodiment, the body portion comprises a human head. For example, the diagram 1010 may be associated with a head of a fetus for use during a fetal ultrasound procedure. A plurality of line indicators is shown overlaid on the anatomical diagram 1010. The line indicators are representative of various image planes or views of interest. In that regard, the views associated with the indicators include a trans-ventricular (TV) image plane, a trans-cerebellar (TC) image plane, and a trans-thalamic (TT) image plane. Another indicator is provided showing a reconstructed frame (MPR). In some embodiments, the MPR indicator may be a dynamic indicator that is responsive to input from a user provided using a user interface device. In the illustrated embodiment, the MPR indicator on the diagram 1010 corresponds to the MPR image 1020 shown beside the diagram 1010. For example, in some embodiments, the GUI 1000 may display a target image corresponding to one of the views shown in the diagram 1010. A user may interact with the GUI using a user interface device, such as a mouse, keyboard, track ball, voice input, touch screen input, or any other suitable input. The input from the user indicates a direction of motion along a simulated motion path. For example, the user input may be received in the form of a scroll of the mouse, a press of an arrow key on a keyboard, and/or a rotation of a track ball. The MPR indicator and the displayed MPR image 1020 can be updated in response to the received input to show adjacent MPR images reconstructed along the simulated motion path. A user may scan/scroll forward and backward along the path, and the MPR indicator may be updated in real time to show the location and orientation of the image plane associated with the MPR image 1020. The MPR image 1020 is also updated based on the user input to show the image corresponding to the MPR image plane illustrated with respect to the diagram 1010.
Thus, the sonographer can control the display of the MPR images as if the user was manually scanning the ultrasound probe up and down a trajectory, but after the ultrasound image data has already been obtained. For example, the GUI 1000 may be provided after the sonographer has obtained the three-dimensional ultrasound data and replaced the ultrasound probe.

[0065] Various modifications to the devices, systems, and/or methods described above could be made without departing from the scope of the present disclosure. For example, in some embodiments, the processor circuit is configured to display a looped playback of the target image and adjacent images to provide the appearance of periodically scanning forward and/or backward along the simulated motion path. In some embodiments, the processor circuit is configured to scan through a plurality of adjacent images in response to a single user input (e.g., key press, button press, etc.). In some embodiments, the processor circuit is configured to receive a user input to correct or adjust the reconstructed target image. For example, in some embodiments, the processor circuit may be configured to receive, from the sonographer via a user input device, a selection corresponding to an adjacent image. The processor circuit may then tag, save, or otherwise identify the selected adjacent image as the target image. In some embodiments, the target image comprises an interpolated image generated as described above. In some embodiments, the adjacent images are displayed successively, one at a time. In other embodiments, the adjacent images are displayed alongside the target image. In some embodiments, multiple adjacent and/or interpolated images are displayed simultaneously. In some embodiments, multiple different simulated motion paths are determined. For example, motion paths corresponding to different directions of uncertainty may be provided. In some embodiments, intersecting simulated motion paths are determined, as well as additional adjacent images associated with the intersecting simulated motion paths. Thus, in some embodiments, different inputs (e.g., up/down, left/right key presses) received through a user interface correspond to adjacent images along different simulated motion paths.
[0066] Embodiments of the present disclosure provide a number of benefits for automated image reconstruction procedures. For example, generating and displaying the dynamic adjacent image stream provides a familiar imaging experience, a simple and intuitive way to correct the target image, and a simple workflow transition from two-dimensional to three-dimensional ultrasound image acquisition. Further, embodiments of the present disclosure provide a user-friendly way to navigate between multiple planes of interest in one anatomy (e.g., the TT, TV, and TC planes for fetal ultrasound), and efficient ways to provide important anatomical context information and renderings directly related to the clinical guidelines for standard plane selection, for example by indicating which anatomical objects should or should not be included in a given plane. Further, embodiments of the present disclosure allow for enhanced clinical confidence in an automatically reconstructed image.
[0067] It will be understood that one or more of the steps of the methods 300, 800, and 900 described above may be performed by one or more components of an ultrasound imaging system, such as a processor or processor circuit, a multiplexer, a beamformer, a signal processing unit, an image processing unit, or any other suitable component of the system. For example, one or more steps described above may be carried out by the processor circuit 150 described with respect to Fig. 2. The processing components of the system can be integrated within the ultrasound imaging device, contained within an external console, or implemented as a separate component.
[0068] Persons skilled in the art will recognize that the apparatus, systems, and methods described above can be modified in various ways. Accordingly, persons of ordinary skill in the art will appreciate that the embodiments encompassed by the present disclosure are not limited to the particular exemplary embodiments described above. In that regard, although illustrative embodiments have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the foregoing without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure.

Claims

What is claimed is:
1. An ultrasound imaging apparatus, comprising:
a processor circuit configured to:
receive, from an ultrasound probe communicatively coupled to the processor circuit, three-dimensional ultrasound data of an anatomy;
generate, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy;
generate, from the three-dimensional ultrasound data, a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the plurality of adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data;
output, to a display in communication with the processor circuit, the target image;
receive a user input representative of a direction of motion along the simulated motion path; and
output, to the display, an adjacent image of the plurality of adjacent images corresponding to the direction of motion.
2. The ultrasound imaging apparatus of claim 1, further comprising the ultrasound probe.
3. The ultrasound imaging apparatus of claim 1, wherein the processor circuit is configured to: interpolate between a position and orientation of the target image and a position and orientation of the adjacent image to generate an interpolated image; and output the interpolated image to the display.
4. The ultrasound imaging apparatus of claim 1, wherein the processor circuit is configured to: determine a direction of uncertainty relative to the target image plane; and determine the simulated motion path based on the determined direction of uncertainty.
5. The ultrasound imaging apparatus of claim 4, wherein the processor circuit is configured to apply a covariance matrix to the three-dimensional ultrasound data to determine the direction of uncertainty.
6. The ultrasound imaging apparatus of claim 1, wherein the processor circuit is configured to: identify, from the three-dimensional ultrasound data, an image plane of interest different from the target image and the adjacent image; and determine the simulated motion path based on the target image, the adjacent image, and the image of interest.
7. The ultrasound imaging apparatus of claim 6, wherein the processor circuit is configured to output the image of interest in response to receiving the user input.
8. The ultrasound imaging apparatus of claim 1, wherein the plurality of adjacent images comprises a plurality of parallel adjacent images associated with a plurality of parallel adjacent image planes.
9. The ultrasound imaging apparatus of claim 1, wherein the processor circuit is configured to generate the target image to exclude an anatomical feature, and wherein the processor circuit is configured to generate the adjacent image to include the anatomical feature.
10. The ultrasound imaging apparatus of claim 1, wherein the processor circuit is further configured to output, to the display, a graphical representation of an adjacent image plane associated with the adjacent image, wherein the graphical representation includes:
a diagrammatic view of a body portion associated with the target image; and
an indicator of the adjacent image plane overlaid on the diagrammatic view of the body portion.
11. A method for reconstructing ultrasound images, comprising:
receiving three-dimensional ultrasound data of an anatomy obtained by an ultrasound probe;
generating, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy;
generating, from the three-dimensional ultrasound data, a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the plurality of adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data;
outputting, to a display, the target image;
receiving a user input representative of a direction of motion along the simulated motion path; and
outputting, to the display, an adjacent image of the plurality of adjacent images corresponding to the direction of motion.
12. The method of claim 11, wherein generating the plurality of adjacent images comprises interpolating between a position and orientation of the target image and a position and orientation of the adjacent image to generate an interpolated image, and wherein the method further comprises outputting the interpolated image to the display.
13. The method of claim 11, wherein generating the plurality of adjacent images comprises: determining a direction of uncertainty relative to the target image plane; and determining the simulated motion path based on the determined direction of uncertainty.
14. The method of claim 13, wherein determining the direction of uncertainty comprises applying a covariance matrix to the three-dimensional ultrasound data.
15. The method of claim 11, further comprising: identifying, from the three-dimensional ultrasound data, an image of interest different from the target image and the adjacent image; and determining the simulated motion path based on the target image, the adjacent image, and the image of interest.
16. The method of claim 15, further comprising outputting the image of interest in response to receiving the user input.
17. The method of claim 11, wherein generating the plurality of adjacent images comprises generating a plurality of parallel adjacent images associated with a plurality of parallel adjacent image planes.
18. The method of claim 11, wherein generating the target image comprises generating the target image to exclude an anatomical feature, and wherein generating the adjacent image comprises generating the adjacent image to include the anatomical feature.
19. The method of claim 11, further comprising:
outputting, to the display, a graphical representation of an adjacent image plane associated with the adjacent image, wherein the graphical representation includes:
a diagrammatic view of a body portion associated with the target image; and
an indicator of the adjacent image plane overlaid on the diagrammatic view of the body portion.
Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4173007A (en) 1977-07-01 1979-10-30 G. D. Searle & Co. Dynamically variable electronic delay lines for real time ultrasonic imaging systems
US6443896B1 (en) 2000-08-17 2002-09-03 Koninklijke Philips Electronics N.V. Method for creating multiplanar ultrasonic images of a three dimensional object
US7771354B2 (en) 2002-12-04 2010-08-10 Koninklijke Philips Electronics N.V. High frame rate three dimensional ultrasound imager
US20070255139A1 (en) * 2006-04-27 2007-11-01 General Electric Company User interface for automatic multi-plane imaging ultrasound system
US20100268085A1 (en) 2007-11-16 2010-10-21 Koninklijke Philips Electronics N.V. Interventional navigation using 3d contrast-enhanced ultrasound
US9107607B2 (en) * 2011-01-07 2015-08-18 General Electric Company Method and system for measuring dimensions in volumetric ultrasound data
US20140155737A1 (en) 2011-08-16 2014-06-05 Koninklijke Philips N.V. Curved multi-planar reconstruction using fiber optic shape data
US20150262353A1 (en) * 2014-03-17 2015-09-17 Samsung Medison Co., Ltd. Method and apparatus for changing at least one of direction and position of plane selection line based on pattern
US20190272667A1 (en) 2016-06-10 2019-09-05 Koninklijke Philips N.V. Systems and methods for generating b-mode images from 3d ultrasound data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALEXANDER SCHMIDT-RICHBERG ET AL.: "Offset regression networks for view plane estimation in 3D fetal ultrasound", PROC. SPIE, vol. 10949, 15 March 2019 (2019-03-15), XP060120522, DOI: 10.1117/12.2512697
JOSEPH REDMON ET AL.: "You Only Look Once: Unified, Real-Time Object Detection", THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR, 2016, pages 779 - 788, XP033021255, DOI: 10.1109/CVPR.2016.91
