WO2007056104A2 - High frequency array ultrasound system - Google Patents

High frequency array ultrasound system

Info

Publication number
WO2007056104A2
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasound
transmit
transducer
signal
arrayed
Prior art date
Application number
PCT/US2006/042891
Other languages
French (fr)
Other versions
WO2007056104A3 (en)
WO2007056104A9 (en)
Inventor
James Mehi
Ronald E. Daigle
Laurence C. Brasfield
Brian Starkoski
Jerrold Wen
Kai Wen Liu
Lauren S. Pflugrath
Stuart F. Foster
Desmond Hirson
Original Assignee
Visualsonics Inc.
Priority date
Filing date
Publication date
Application filed by Visualsonics Inc.
Priority to ES06827417T (ES2402741T3)
Priority to CN200680050246.3A (CN101351724B)
Priority to EP06827417A (EP1952175B1)
Priority to CA2628100A (CA2628100C)
Priority to JP2008539044A (JP5630958B2)
Publication of WO2007056104A2
Publication of WO2007056104A9
Publication of WO2007056104A3
Priority to HK09106667.0A (HK1129243A1)


Classifications

    • G01S7/52095: Details related to the ultrasound signal acquisition, e.g. scan sequences, using multiline receive beamforming
    • A61B8/56: Diagnosis using ultrasonic, sonic or infrasonic waves; details of data transmission or power supply
    • A61B8/565: Details of data transmission or power supply involving data transmission via a network
    • G01S15/8915: Short-range pulse-echo imaging systems using a static transducer configuration with a transducer array
    • G01S15/8927: Short-range pulse-echo imaging using a transducer array with two or more subarrays or subapertures used simultaneously or sequentially
    • G01S15/8956: Short-range pulse-echo imaging characterised by the transmitted frequency spectrum, using frequencies at or above 20 MHz
    • G01S15/8997: Short-range pulse-echo imaging using synthetic aperture techniques
    • G01S7/52017: Details of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52034: Details of receivers; data rate converters
    • G01S7/52088: Details related to the ultrasound signal acquisition using synchronization techniques involving retrospective scan line rearrangements
    • G10K11/341: Sound-focusing or directing using electrical steering of transducer arrays (beam steering); circuits therefor
    • G10K11/346: Beam-steering circuits using phase variation
    • G01S15/8959: Short-range pulse-echo imaging using coded signals for correlation purposes
    • G01S7/5202: Details of transmitters for pulse systems
    • G01S7/524: Details of pulse systems; transmitters
    • G01S7/526: Details of pulse systems; receivers

Definitions

  • Ultrasound echography systems using an arrayed transducer have been used in human clinical applications where the desired image resolution is in the order of millimeters. Operating frequencies in these clinical systems are typically below 10 MHz. With these low operating frequencies, however, such systems are not appropriate for imaging where higher resolutions are needed, for example in imaging small animals such as mice or small tissue structures in humans.
  • the heart rate of an adult mouse may be as high as 500 beats per minute, so high frame rate capability may be desired.
  • the width of the region being imaged, the field of view, should also be sufficient to include the entire organ being studied.
  • Ultrasound systems for imaging at frequencies above 15 MHz have been developed using a single element transducer.
  • arrayed transducers offer better image quality, can achieve higher acquisition frame rates and offer other advantages over single element transducer systems.
  • a system and method for acquiring an ultrasound signal comprising a signal processing unit adapted for acquiring a received ultrasound signal from an ultrasound transducer having a plurality of elements.
  • the system can be adapted to receive ultrasound signals having a frequency of at least 15 megahertz (MHz) with a fixed transducer having a field of view of at least 5.0 millimeters (mm) at a frame rate of at least 20 frames per second (fps).
  • the signal processing unit can further produce an ultrasound image from the acquired ultrasound signal.
  • the transducer can be, but is not limited to, a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, or a curved array transducer.
  • the system can include such a transducer or be adapted to operate with such a transducer.
  • Also provided herein is a system and method for acquiring an ultrasound signal comprising a processing unit for acquiring received ultrasound signals from an ultrasound transducer operating at a transmit and receive frequency of at least 15 MHz, wherein the processing unit comprises a signal sampler that uses quadrature sampling to acquire the ultrasound signal.
  • FIG. 1 is a representation in block diagram form of a computing operating environment
  • FIGS. 2A-2C are exemplary top, bottom and cross-sectional views of an exemplary schematic PZT stack of the present invention, the top view showing, at the top and bottom of the PZT stack, portions of the ground electric layer extending outwardly from the overlying lens; the bottom view showing, at the longitudinally extending edges, exposed portions of the dielectric layer between individual signal electrode elements (as one will appreciate, not shown in the center portion of the PZT stack are the lines showing the individualized signal electrode elements, one signal electrode per element of the PZT stack);
  • FIG. 3A is a top plan view of an interposer for use with the PZT stack of FIGS. 2A-2C, showing electrical traces extending outwardly from adjacent the central opening of the transducer and ground electrical traces located at the top and bottom portions of the interposer, showing a dielectric layer disposed thereon a portion of the surface of the interposer, the dielectric layer defining an array of staggered wells positioned along an axis parallel to the longitudinal axis of the interposer, each well communicating with an electrical trace of the interposer, and further showing a solder paste ball bump mounted therein each well in the dielectric layer such that, when a PZT stack is mounted thereon the dielectric layer and heat is applied, the solder melts to form the desired electrical continuity between the individual element signal electrodes and the individual traces on the interposer, the well helping to retain the solder within the confines of the well;
  • FIG. 3B is a partial enlarged view of the staggered wells of the dielectric layer and the electrical traces of the underlying interposer of FIG. 3 A, the well sized to accept the solder paste ball bumps;
  • FIG. 4A is a top plan view of the PZT stack of FIG. 2A mounted thereon the dielectric layer and the interposer of FIG. 3 A;
  • FIG. 4B is a top plan view of the PZT stack of FIG. 2A mounted thereon the dielectric layer and interposer of FIG. 3 A, showing the PZT stack as a transparent layer to illustrate the mounting relationship between the PZT stack and the underlying interposer, the solder paste ball bumps mounted therebetween forming an electrical connection between the respective element signal electrodes and the electrical traces on the interposer;
  • FIG. 5A is a schematic top plan view of an exemplary circuit board for mounting the transducer of the present invention thereto, the circuit board having a plurality of board electrical traces formed thereon, each board electrical trace having a proximal end adapted to couple to an electrical trace of the transducer and a distal end adapted to couple to a connector, such as, for example, a cable for communication of signals therethrough;
  • FIG. 5B is a top plan view of an exemplary circuit board for mounting of an exemplary 256-element array having a 75 micron pitch;
  • FIG. 5C is a top plan view of the vias of the circuit board of FIG. 5B that are in communication with an underlying ground layer of the circuit board;
  • FIG. 6 is a top plan view of a portion of the exemplified circuit board showing, in Region A, the ground electrode layer of the transducer wire bonded to an electrical trace on the interposer, which is, in turn, wire bonded to ground pads of the circuit board;
  • FIG. 7A is a partial enlarged cross-sectional view of Region A of FIG. 6, showing the dielectric layer positioned about the solder paste ball bumps and between the PZT stack and the interposer;
  • FIG. 7B is a partial enlarged cross-sectional view of Region B of FIG. 6, showing the dielectric layer between the PZT stack and the interposer;
  • FIGS. 8A and 8B are partial cross-sectional views of an exemplified transducer mounted to a portion of the circuit board;
  • FIG. 9 is an enlarged partial view of Region B of an exemplified transducer mounted to a portion of the circuit board;
  • FIG. 10 is a partial enlarged cross-sectional view of a transducer that does not include an interposer, showing a solder paste ball bump mounted thereon the underlying circuit board, each ball bump being mounted onto one board electrical trace of the circuit board, and showing the PZT stack being mounted thereon so that the respective element signal electrodes of the PZT stack are in electrical continuity, via the respective ball bumps, to their respective board electrical trace of the circuit board;
  • FIG. 11A is a partial enlarged cross-sectional view of FIG. 10, showing the ground electrode layer of the transducer, without an interposer, wire bonded to ground pads of the circuit board;
  • FIG. 11B is a partial enlarged cross-sectional view of FIG. 10, showing the ball bump disposed therebetween and in electrical communication with the electrical trace of the circuit board and the element signal electrode of the PZT stack;
  • FIG. 12A is a schematic showing the flex circuit board and a pair of Samtec BTH-090 connectors mounted to a rigid portion of the circuit board;
  • FIG. 12B is an exemplary pin-out table for the connector shown in FIGS. 5B and 12A;
  • FIG. 13 is a schematic showing a side view of the individual coaxial cables that are to be operatively coupled to the pair of Samtec BTH-090 connectors on the flex circuit board via a pair of BSH-090 connectors;
  • FIG. 14 is a schematic showing an exemplary plan view of half of the coaxial leads therein the cable connected to one of the BSH-090 connectors;
  • FIG. 15A is an illustration of an exemplary plan view of the distal end of a medical cable assembly connected to the folded flex circuit board; the cable's proximal end (not shown) may include a multi-pin ZIF connector that interfaces with the ultrasound system and may be used to practice one or more aspects of the present invention;
  • FIG. 15B illustrates an exemplary termination pin-out for the individual coax cables of a medical cable assembly to a multi-pin ZIF connector having an exemplary ZIF connector such as an ITT Cannon DLM6 connector;
  • FIG. 16 is a block diagram illustrating an exemplary high frequency ultrasonic imaging system
  • FIG. 17 is a block diagram further illustrating the exemplary high frequency ultrasonic imaging system shown in FIG. 16;
  • FIG. 18a is a schematic diagram illustrating exemplary receive beamformers, transmit beamformers, front end electronics, and associated components;
  • FIG. 18b is an exemplary embodiment providing additional detail of the front end electronics shown in FIG. 18a;
  • FIG. 18c is an exemplary embodiment of a receive controller (RX controller) in an embodiment according to the present invention
  • FIG. 18d is an illustration of an exemplary transmit controller (TX controller) in an embodiment according to the present invention
  • FIG. 19 is a system signal processing block diagram illustrating an exemplary beamformer control board
  • FIG. 20 is a schematic diagram of a TX/RX Switch and Pulser and related circuitry
  • FIG. 21 is a schematic diagram of an alternative embodiment of a TX/RX Switch and Pulser and related circuitry
  • FIG. 22 is a block diagram for an exemplary transmit beamformer control
  • FIGS. 22A-22C illustrate how exemplary waveshape data can be used to change the fine delay, pulse width and dead time for "A" and "B" signals;
  • FIG. 24 illustrates a systems electronics overview of an exemplary high frequency ultrasonic imaging system
  • FIG. 25 shows an exemplary single channel delay scheme for quadrature sampling
  • FIG. 25B is an alternative way of implementing interpolation filters, phase rotation and dynamic apodization according to an embodiment of the invention.
  • FIG. 26 illustrates an exemplary control RAM for storing receive control signals
  • FIG. 26A shows exemplary beamformer delay control signals for center and outer elements of an arrayed transducer
  • FIG. 27 is a block diagram of an exemplary transmit/receive synchronization scheme
  • FIG. 27A is a block diagram of an alternate exemplary transmit/receive synchronization scheme
  • FIG. 28 illustrates an exemplary RF memory buffer for storage of beamformer output
  • FIG. 29 illustrates an exemplary system software overview of an exemplary high frequency ultrasonic imaging system
  • FIG. 30 is an exemplary main system software application overview for an exemplary high frequency ultrasonic imaging system
  • FIG. 31 illustrates an exemplary modular system overview for an exemplary high frequency ultrasonic imaging system
  • FIG. 32 displays an exemplary transmit frequency, half cycle on time, and pulse durations
  • FIG. 33 illustrates exemplary bandwidth sampling of 30 MHz signal spectrum
  • FIG. 34 illustrates an exemplary quadrature sampled sine wave at 0.9 times the sample frequency
  • FIG. 34A is an exemplary illustration of the 16 sample points of FIG. 34 with respect to Q and I sampling points;
  • FIG. 34B is an exemplary illustration of a window of eight samples used by an exemplary FIR filter for interpolation of points 0-3, between Q and I samples;
  • FIG. 34C is the exemplary window of FIG. 34 moved forward by one sample in order to interpolate points 4-15;
  • FIG. 35 displays exemplary interpolated points for I and Q waveforms
  • FIG. 36 displays an exemplary quadrature sample data set for single ray line acquisition from a linear array
  • FIGS. 37A and 37B display two exemplary channel signals returned from the same range point, but with a path length difference corresponding to one-half wavelength;
  • FIG. 38 displays 3-1 multi-line scanning with an exemplary curved array transducer
  • FIG. 39 displays a conceptual implementation of an interpolation delay method
  • FIG. 40 displays an exemplary 3-1 multi-line operation of an interpolation delay method
  • FIG. 41 is a schematic design of Complementary Hilbert Transform Filters.
  • Ranges may be expressed herein as from "about" one particular value, and/or to "about" another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent "about," it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.
  • aspects of the exemplary systems disclosed herein can be implemented via a general- purpose computing device such as one in the form of a computer 101 shown in FIG. 1.
  • the components of the computer 101 can include, but are not limited to, one or more processors or processing units 103, a system memory 112, and a system bus 113 that couples various system components including the processor 103 to the system memory 112.
  • the system bus 113 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • bus architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnects (PCI) bus also known as a Mezzanine bus.
  • the bus 113, and all buses specified in this description, can also be implemented over a wired or wireless network connection, and each of the subsystems, including the processor 103, a mass storage device 104, an operating system 105, application software 106, data 107, a network adapter 108, system memory 112, an Input/Output Interface 110, a display adapter 109, a display device 111, and a human machine interface 102, can be contained within one or more remote computing devices 114a,b,c at physically separate locations, connected through buses of this form, in effect implementing a fully distributed system.
  • the computer 101 typically includes a variety of computer readable media. Such media can be any available media that is accessible by the computer 101 and includes both volatile and non-volatile media, removable and non-removable media.
  • the system memory 112 includes computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM).
  • the system memory 112 typically contains data such as data 107 and/or program modules such as operating system 105 and application software 106 that are immediately accessible to and/or are presently operated on by the processing unit 103.
  • the computer 101 may also include other removable/non-removable, volatile/non-volatile computer storage media.
  • FIG. 1 illustrates a mass storage device 104 which can provide non-volatile storage of computer code, computer readable instructions, data structures, program modules, and other data for the computer 101.
  • a mass storage device 104 can be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like.
  • Any number of program modules can be stored on the mass storage device 104, including, by way of example, an operating system 105 and application software 106. Each of the operating system 105 and application software 106 (or some combination thereof) may include elements of the programming and the application software 106. Data 107 can also be stored on the mass storage device 104. Data 107 can be stored in any of one or more databases known in the art. Examples of such databases include DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, mySQL, PostgreSQL, and the like. The databases can be centralized or distributed across multiple systems.
  • a user can enter commands and information into the computer 101 via an input device (not shown).
  • input devices include, but are not limited to, a keyboard, pointing device (e.g., a "mouse"), a microphone, a joystick, a serial port, a scanner, and the like.
  • such input devices can be connected to the processing unit 103 via a human machine interface 102 that is coupled to the system bus 113, but may also be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB).
  • the user interface can be chosen from one or more of the input devices listed above.
  • the user interface can also include various control devices such as toggle switches, sliders, variable resistors and other user interface devices known in the art.
  • the user interface can be connected to the processing unit 103. It can also be connected to other functional blocks of the exemplary system described herein in conjunction with or without connection with the processing unit 103 connections described herein.
  • a display device 111 can also be connected to the system bus 113 via an interface, such as a display adapter 109.
  • a display device can be a monitor or an LCD (Liquid Crystal Display).
  • other output peripheral devices can include components such as speakers (not shown) and a printer (not shown) which can be connected to the computer 101 via Input/Output Interface 110.
  • the computer 101 can operate in a networked environment using logical connections to one or more remote computing devices 114a,b,c.
  • a remote computing device can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and so on.
  • Logical connections between the computer 101 and a remote computing device 114a,b,c can be made via a local area network (LAN) and a general wide area network (WAN).
  • Such network connections can be through a network adapter 108.
  • a network adapter 108 can be implemented in both wired and wireless environments. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet 115.
  • the remote computer 114a,b,c may be a server, a router, a peer device or other common network node, and typically includes all or many of the elements already described for the computer 101.
  • program modules and data may be stored on the remote computer 114a,b,c.
  • the logical connections include a LAN and a WAN. Other connection methods may be used, and networks may include such things as the "world wide web" or Internet.
  • Computer readable media can be any available media that can be accessed by a computer.
  • Computer readable media may comprise "computer storage media” and "communications media.”
  • Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
  • An implementation of the disclosed method may be stored on or transmitted across some form of computer readable media.
  • the processing of the disclosed method can be performed by software components.
  • the disclosed method may be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices.
  • program modules include computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the disclosed method may also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including memory storage devices.
  • the hardware implementation can include any or a combination of the following technologies, which are all well known in the art: discrete electronic components, a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit having appropriate logic gates, a programmable gate array(s) (PGA), field programmable gate array(s) (FPGA), etc.
  • the software comprises an ordered listing of executable instructions for implementing logical functions, and can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
  • aspects of the exemplary systems can be implemented in computerized systems.
  • aspects of the exemplary systems including for instance the computing unit 101, can be operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the system and method include, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional examples include set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the system and method may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including memory storage devices.
  • the described embodiments enable in vivo visualization, assessment, and measurement of anatomical structures and hemodynamic function in longitudinal imaging studies of small animals.
  • the systems can provide images having very high resolution, image uniformity, depth of field, adjustable transmit focal depths, multiple transmit focal zones for multiple uses.
  • the ultrasound image can be of a subject or an anatomical portion thereof, such as a heart or a heart valve.
  • the image can also be of blood and can be used for applications including evaluation of the vascularization of tumors.
  • the systems can be used to guide needle injections.
  • the described embodiments can also be used for human clinical, medical, manufacturing (e.g., ultrasonic inspections, etc.) or other applications where producing an image at a transmit frequency of 15 MHz or higher is desired.
  • Embodiments according to the described systems can comprise one or more of the following, which are described in greater detail herein: an array transducer that can be operatively connected to a processing system that may comprise signal and image processing capabilities; digital transmit and receive beamformer subsystems; analog front end electronics; a digital beamformer controller subsystem; a high voltage subsystem; a computer module; a power supply module; a user interface; software to run the beamformer; a scan converter; and other system features as described herein.
  • An arrayed transducer used in the system can be incorporated into a scanhead that, in one embodiment, may be attached to a fixture during imaging, which allows the operator to acquire images free of the vibrations and shaking that usually result from "free hand" imaging.
  • a small animal subject may also be positioned on a heated platform with access to anesthetic equipment, and a means to position the scanhead relative to the subject in a flexible manner.
  • the scanhead can be attached to a fixture during imaging.
  • the fixture can have various features, such as freedom of motion in three dimensions, rotational freedom, a quick release mechanism, etc.
  • the fixture can be part of a "rail system" apparatus, and can integrate with the heated mouse platform.
  • the systems can be used with platforms and apparatus used in imaging small animals including "rail guide” type platforms with maneuverable probe holder apparatuses.
  • the described systems can be used with multi-rail imaging systems, and with small animal mount assemblies as described in U.S. Patent Application No. 10/683,168, entitled "Integrated Multi-Rail Imaging System," U.S. Patent Application No. 10/053,748, entitled "Integrated Multi-Rail Imaging System," U.S. Patent Application No. 10/683,870, now U.S. Patent No. 6,851,392, issued February 8, 2005, entitled "Small Animal Mount Assembly," and U.S. Patent Application No. 11/053,653, entitled "Small Animal Mount Assembly," which are each fully incorporated herein by reference.
  • an embodiment of the system may include means for acquiring ECG and temperature signals for processing and display.
  • An embodiment of the system may also display physiological waveforms such as an ECG, respiration or blood pressure waveform.
  • a system for acquiring ultrasound signals comprising a signal processing unit adapted for acquiring a received ultrasound signal from an ultrasound transducer having a plurality of elements.
  • the system can be adapted to receive ultrasound signals having a frequency of at least 15 megahertz (MHz) with a transducer having a field of view of at least 5.0 millimeters (mm) at a frame rate of at least 20 frames per second (fps).
  • the ultrasound signals can be acquired at an acquisition rate of 50, 100, or 200 frames per second (fps).
  • ultrasound signals can be acquired at an acquisition rate of 200 frames per second (fps) or higher.
  • the received ultrasound signals can be acquired at a frame rate within the range of about 100 fps to about 200 fps.
  • the length of the transducer is equal to the field of view.
  • the field of view can be wide enough to include organs of interest such as the small animal heart and surrounding tissue for cardiology, and full length embryos for abdominal imaging.
  • the two-way bandwidth of the transducer can be approximately 50% to 100%.
  • the two-way bandwidth of the transducer can be approximately 60% to 70%.
  • Two-way bandwidth refers to the bandwidth of the transducer that results when the transducer is used both as a transmitter of ultrasound and a receiver; that is, the two-way bandwidth is the bandwidth of the one-way spectrum squared.
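  • As an illustration only (not taken from the patent), the squaring relationship can be sketched numerically in Python; the Gaussian spectral shape, the 40 MHz center frequency and the 15 MHz one-way spectral width below are assumed values chosen so that the result lands in the 60% to 70% range mentioned above:

        import numpy as np

        f = np.linspace(1e6, 80e6, 8000)   # frequency axis, Hz
        f0 = 40e6                          # assumed center frequency, Hz
        sigma = 15e6                       # assumed width of the one-way amplitude spectrum, Hz

        one_way = np.exp(-0.5 * ((f - f0) / sigma) ** 2)   # illustrative Gaussian one-way spectrum
        two_way = one_way ** 2                             # two-way spectrum: the one-way spectrum squared

        # half-amplitude (-6 dB) points of the two-way spectrum
        passband = f[two_way >= 0.5 * two_way.max()]
        bandwidth = passband.max() - passband.min()
        print(f"two-way -6 dB fractional bandwidth ~ {100 * bandwidth / f0:.0f}%")   # roughly 62% here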
  • the processing unit produces an ultrasound image from the acquired ultrasound signal(s).
  • the acquired signals may be processed to generate an ultrasound image at a display rate that is slower than the acquisition rate.
  • the generated ultrasound image can have a display rate of 100 fps or less.
  • the generated ultrasound image has a display rate of 30 fps or less.
  • the field of view can range from about 2.0 mm to about 30.0 mm.
  • the processing unit can acquire the received ultrasound signals at an acquisition rate of at least 300 frames per second (fps). In other examples, the acquisition rate can be 50, 100, 200 or more frames per second (fps).
  • the image generated using the disclosed systems may have a lateral resolution of about 150 microns (μm) or less and an axial resolution of about 75 microns (μm) or less.
  • the image can have an axial resolution of about 30 microns (μm).
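  • The following Python sketch (a rough consistency check, not part of the patent) relates such resolution figures to the wavelength using common rules of thumb; the speed of sound (1540 m/s), pulse length (1.5 cycles) and f-number (2) are assumed values:

        c = 1540.0             # assumed speed of sound in soft tissue, m/s
        f0 = 40e6              # assumed transmit center frequency, Hz
        wavelength = c / f0    # about 38.5 micrometers at 40 MHz

        n_cycles = 1.5         # assumed effective transmit pulse length, in cycles
        f_number = 2.0         # assumed ratio of focal depth to aperture width

        axial_res = n_cycles * wavelength / 2.0   # rule of thumb: half the spatial pulse length
        lateral_res = f_number * wavelength       # rule of thumb: wavelength times f-number

        print(f"wavelength         ~ {wavelength * 1e6:.1f} um")
        print(f"axial resolution   ~ {axial_res * 1e6:.1f} um")     # about 29 um, of the order of the 30 um above
        print(f"lateral resolution ~ {lateral_res * 1e6:.1f} um")   # about 77 um, within the 150 um bound above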
  • embodiments according to the present invention transmit ultrasound that may be focused at a depth of about 1.0 mm to about 30.0 mm.
  • the transmitted ultrasound can be focused at a depth of about 3.0 mm to about 10.0 mm.
  • the transmitted ultrasound can be focused at a depth of about 2.0 mm to about 12.0 mm, of about 1.0 mm to about 6.0 mm, of about 3.0 mm to about 8.0 mm, or of about 5.0 mm to about 30.0 mm.
  • the transducer can be, but is not limited to, a linear array transducer, a phased array transducer, a two-dimensional (2 -D) array transducer, or a curved array transducer.
  • a linear array is typically flat, i.e., all of the elements lie in the same (flat) plane.
  • a curved linear array is typically configured such that the elements lie in a curved plane.
  • the transducers described herein are "fixed" transducers. The term "fixed" means that the transducer array does not utilize movement in its azimuthal direction during transmission or receipt of ultrasound in order to achieve its desired operating parameters, or to acquire a frame of ultrasound data.
  • the term “fixed” may also mean that the transducer is not moved in an azimuthal or longitudinal direction relative to the scan head, probe, or portions thereof during operation.
  • the described transducers, which are fixed as described, are referred to throughout as an "array,” a “transducer,” an “ultrasound transducer,” an “ultrasound array,” an “array transducer,” an “arrayed transducer,” an “ultrasonic transducer” or combinations of these terms, or by other terms which would be recognized by those skilled in the art as referring to an ultrasound transducer.
  • the transducers as described herein can be moved between the acquisition of ultrasound frames, for example, the transducer can be moved between scan planes after acquiring a frame of ultrasound data, but such movement is not required for their operation.
  • the transducer of the present system can be moved relative to the object imaged while still remaining fixed as to the operating parameters.
  • the transducer can be moved relative to the subject during operation to change position of the scan plane or to obtain different views of the subject or its underlying anatomy.
  • Arrayed transducers are comprised of a number of elements.
  • the transducer used to practice one or more aspects of the present invention comprises at least 64 elements.
  • the transducer comprises 256 elements.
  • the transducer can also comprise fewer or more than 256 elements.
  • the transducer elements can be separated by a distance equal to about one-half the wavelength to about two times the wavelength of the center transmit frequency of the transducer (referred to herein as the "element pitch"). In one aspect, the transducer elements are separated by a distance equal to about the wavelength of the center transmit frequency of the transducer.
  • the center transmit frequency of the transducer used is equal to or greater than 15 MHz.
  • the center transmit frequency can be approximately 15 MHz, 20 MHz, 30 MHz, 40 MHz, 50 MHz, 55 MHz or higher.
  • the ultrasound transducer can transmit ultrasound into the subject at a center frequency within the range of about 15 MHz to about 80 MHz.
  • the transducer has a center operating frequency of at least 15 MHz and the transducer has an element pitch equal to or less than 2.0 times the wavelength of sound at the transducer's transmitted center frequency.
  • the transducer can also have an element pitch equal to or less than 1.5 times the wavelength of sound at the transducer's transmitted center frequency.
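  • For example (an illustrative calculation, not from the patent), the 75 micron pitch of the exemplary 256-element array described elsewhere in this document can be expressed in wavelengths at a few candidate center frequencies, assuming a sound speed of 1540 m/s; each value falls within the half-wavelength to two-wavelength range described above:

        c = 1540.0                      # assumed speed of sound in tissue, m/s
        pitch = 75e-6                   # 75 micron element pitch of the exemplary 256-element array

        for f0 in (15e6, 30e6, 40e6):   # candidate center transmit frequencies, Hz
            wavelength = c / f0
            print(f"{f0 / 1e6:.0f} MHz: wavelength = {wavelength * 1e6:.1f} um, "
                  f"pitch = {pitch / wavelength:.2f} wavelengths")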
  • one transducer that may be used with the described system can be, among others, an arrayed transducer as described in U.S. Patent Application No. 11/109,986, entitled “Arrayed Ultrasonic Transducer,” filed April 20, 2005 and published on December 8,
  • the transducer may also comprise an array of piezoelectric elements which can be electronically steered using variable pulsing and delay mechanisms.
  • the processing system may include multiple transducer ports for the interface of one or more transducers or scanheads. As previously described, a scanhead can be hand held or mounted to a rail system, and the scanhead cable can be flexible.
  • each element of the transducer can be operatively connected to a receive channel of a processing unit.
  • the number of transducer elements is greater than the number of receive channels.
  • the transducer may comprise at least 64 elements that are operatively connected to at least 32 receive channels.
  • 256 elements are operatively connected to 64 receive channels.
  • 256 elements are operatively connected to 128 receive channels.
  • 256 elements are operatively connected to 256 receive channels.
  • Each element can also be operatively connected to a transmit channel.
  • the system can further comprise one or more signal samplers for each receive channel.
  • the signal samplers can be analog-to-digital converters (ADCs).
  • the signal samplers can use direct sampling techniques to sample the received signals.
  • the signal samplers can use bandwidth sampling to sample the received signals.
  • the signal samplers can use quadrature sampling to sample the received signals.
  • the signal samplers comprise sampling clocks shifted 90 degrees out of phase.
  • the receive clock frequency can be approximately equal to the center frequency of a received ultrasound signal but may be different from the transmit frequency. For example, in many situations, the center frequency of the received signal has been shifted lower than the center frequency of the transmit signal due to frequency dependent attenuation in the tissue being imaged. For these situations the receive sample clock frequency can be lower than the transmit frequency.
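  • A minimal Python sketch of quadrature sampling follows (illustrative only; the echo waveform, frequencies and phase are assumed rather than taken from the patent). The echo is sampled with an in-phase clock at approximately the signal center frequency and a second clock shifted by a quarter carrier period (90 degrees), and the envelope is recovered from the resulting I/Q pairs:

        import numpy as np

        f0 = 30e6                      # assumed received-signal center frequency, Hz
        fs = f0                        # receive sample clock approximately equal to the center frequency
        n = np.arange(64)

        def echo(t):
            # hypothetical narrow-band echo: Gaussian envelope on a 30 MHz carrier with an arbitrary phase
            return np.exp(-((t - 1.0e-6) / 0.4e-6) ** 2) * np.cos(2 * np.pi * f0 * t + 0.7)

        t_i = n / fs                   # in-phase (I) sampling instants
        t_q = t_i + 1.0 / (4 * f0)     # quadrature (Q) clock shifted by a quarter carrier period

        i_samples = echo(t_i)
        q_samples = echo(t_q)

        envelope = np.hypot(i_samples, q_samples)        # magnitude recovered from the I/Q pairs
        print(f"peak envelope ~ {envelope.max():.2f}")   # close to 1.0, the true peak of the envelope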
  • An acquired signal can be processed using an interpolation filtration method.
  • a delay resolution can be used, which can be less than the receive clock period.
  • the delay resolution can be, for example, 1/16 of the receive clock period.
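  • As a simplified sketch (the numbers are illustrative, and linear interpolation stands in for the FIR interpolation filters described elsewhere in this document), a per-channel delay quantized to 1/16 of the receive clock period could be applied as follows:

        import numpy as np

        fs = 30e6                      # assumed receive sample clock, Hz
        period = 1.0 / fs
        delay_step = period / 16       # fine delay resolution: 1/16 of the receive clock period

        t = np.arange(256) / fs
        x = np.cos(2 * np.pi * 3e6 * t) * np.exp(-((t - 4e-6) / 2e-6) ** 2)   # hypothetical channel signal

        wanted_delay = 3.7e-8          # desired beamforming delay for this channel, seconds (illustrative)
        quantized = round(wanted_delay / delay_step) * delay_step             # snap to the 1/16-period grid

        # evaluate the sampled signal at the delayed instants (linear interpolation for simplicity)
        delayed = np.interp(t - quantized, t, x, left=0.0)
        print(f"delay applied: {quantized * 1e9:.2f} ns ({quantized / period:.4f} sample periods)")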
  • the processing unit can comprise a receive beamformer.
  • the receive beamformer can be implemented using at least one field programmable gate array (FPGA) device.
  • the processing unit can also comprise a transmit beamformer.
  • the transmit beamformer can also be implemented using at least one FPGA device.
  • 512 lines of ultrasound are generated, transmitted into the subject and received from the subject for each frame of the generated ultrasound image.
  • 256 lines of ultrasound can also be generated, transmitted into the subject and received from the subject for each frame of the generated ultrasound image.
  • at least two lines of ultrasound can be generated, transmitted into the subject and received from the subject at each element of the array for each frame of the generated ultrasound image.
  • one line of ultrasound is generated, transmitted into the subject and received from the subject at each element of the array for each frame of the generated ultrasound image.
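  • As a rough consistency check (with assumed numbers, not taken from the patent), the achievable acquisition frame rate follows from the round-trip time per line and the number of lines per frame; with a 12 mm imaging depth and 256 lines, the bound is of the same order as the acquisition rates quoted above:

        c = 1540.0               # assumed speed of sound, m/s
        depth = 12e-3            # assumed maximum imaging depth, m
        lines_per_frame = 256    # ray lines per frame, as in the example above

        round_trip = 2 * depth / c                 # time for one transmit/receive event
        max_line_rate = 1.0 / round_trip           # upper bound on the line (pulse repetition) rate
        max_frame_rate = max_line_rate / lines_per_frame

        print(f"round trip     ~ {round_trip * 1e6:.1f} us")
        print(f"max line rate  ~ {max_line_rate / 1e3:.0f} kHz")
        print(f"max frame rate ~ {max_frame_rate:.0f} frames per second")   # roughly 250 fps here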
  • the ultrasound systems described herein can be used in multiple imaging modes.
  • the systems can be used to produce an image in B-mode, M-mode, Pulsed Wave (PW) Doppler mode, power Doppler mode, color flow Doppler mode, RF-mode and 3-D mode.
  • the systems can be used in Color Flow Imaging modes, including directional velocity color flow, Power Doppler imaging and Tissue Doppler imaging.
  • the systems can also be used with Steered PW Doppler, with very high pulse repetition frequencies (PRF).
  • the systems can also be used in M-Mode, with simultaneous B-Mode, for cardiology or other applications where such techniques are desired.
  • the system can optionally be used in Duplex and Triplex modes, in which M-Mode and PW Doppler and/or Color Flow modes run simultaneously with B-Mode in real-time.
  • a 3-D mode in which B-Mode or Color Flow mode information is acquired over a 3-dimensional region and presented in a 3-D surface rendered display can also be used.
  • a line based image reconstruction or "EKV" mode can be used for cardiology or other applications, in which image information is acquired over several cardiac cycles and recombined to provide a very high frame rate display. Line based image reconstruction methods are described in U.S. Patent Application No. 10/736,232, now U.S.
  • Such line based imaging methods can be incorporated to produce an image when a high frame acquisition rate is desirable, for example when imaging a rapidly beating mouse heart.
  • the transducer can transmit at a pulse repetition frequency (PRF) of at least 500 hertz (Hz).
  • the system can further comprise a processing unit for generating a color flow Doppler ultrasound image from the received ultrasound.
  • the PRF is between about 100 Hz and about 150 kHz.
  • the PRF is between about 100 Hz and about 10 kHz.
  • the PRF can be between about 500 Hz and about 150 kHz.
  • the PRF can be between about 50 Hz and about 10 kHz.
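  • For PW and color flow Doppler, the PRF bounds the highest velocity that can be measured without aliasing; the short Python sketch below applies the standard limit with an assumed 30 MHz transmit frequency, a 1540 m/s sound speed and a zero Doppler angle (illustrative values, not from the patent):

        c = 1540.0    # assumed speed of sound, m/s
        f0 = 30e6     # assumed Doppler transmit frequency, Hz

        for prf in (1e3, 10e3, 150e3):   # pulse repetition frequencies within the ranges above, Hz
            # standard PW Doppler aliasing limit at zero Doppler angle: v_max = c * PRF / (4 * f0)
            v_max = c * prf / (4 * f0)
            print(f"PRF {prf / 1e3:6.1f} kHz -> max unambiguous velocity ~ {v_max * 100:.1f} cm/s")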
  • a circuit board according to an embodiment of the present invention is adapted to accept an exemplary transducer and is further adapted to connect to at least one conventional connector.
  • the conventional connector can be adapted to complementarily connect with a cable for transmission and/or supply of required signals.
  • FIGS. 5A-5C show various views of an exemplary circuit board for a 256 element array having a 75 micron pitch.
  • In FIGS. 2A-4B, an exemplary transducer for use with the exemplary circuit board is illustrated.
  • In FIGS. 2A-4B, exemplary top, bottom and cross-sectional views of an exemplary schematic PZT stack are shown.
  • FIG. 2A shows a top view of the PZT stack and illustrates portions of the ground electric layer that extend from the top and bottom portions of the PZT stack. In one aspect, the ground electric layer extends the full width of the PZT stack.
  • FIG. 2B shows a bottom view of the PZT stack. In this aspect, along the longitudinally extending edges of the PZT stack, the PZT stack forms exposed portions of the dielectric layer between individual signal electrode elements.
  • the signal elements extend the full width of the PZT stack.
  • not shown in the underlying "center portion" of the PZT stack are lines showing the individualized signal electrode elements.
  • there is one signal electrode per element of the PZT stack e.g., 256 signal electrodes for a 256-element array.
  • FIG. 3A is a top plan view of an interposer for use with the PZT stack of FIGS. 2A-2C, comprising electrical traces extending outwardly from adjacent the central opening of the interposer.
  • the interposer further comprises ground electrical traces located at the top and bottom portions of the piece.
  • the interposer can further comprise a dielectric layer disposed thereon a portion of the top surface of the interposer about the central opening of the piece.
  • the dielectric layer defines two arrays of staggered wells, one array being on each side of the central opening and extending along an axis parallel to the longitudinal axis of the interposer.
  • Each well is in communication with an electrical trace of the interposer.
  • a solder paste can be used to fill each of the wells in the dielectric layer such that, when a PZT stack is mounted thereon the dielectric layer and heat is applied, the solder melts to form the desired electrical continuity between the individual element signal electrodes and the individual traces on the interposer. In use, the well helps to retain the solder within the confines of the well.
  • FIG. 4A is a top plan view of the PZT stack shown in FIG. 2A mounted thereon the dielectric layer of the interposer shown in FIG. 3A.
  • FIG. 4B provides a top plan view of the PZT stack shown in FIG. 2A mounted thereon the dielectric layer and interposer shown in FIG. 3A, in which the PZT stack is shown as a transparency. This provides an illustration of the mounting relationship between the PZT stack and the underlying dielectric layer/interposer, the solder paste mounted therebetween forming an electrical connection between the respective element signal electrodes and the electrical traces on the interposer.
  • In FIG. 5A, a schematic top plan view of an exemplary circuit board for mounting the transducer of the present invention thereto is illustrated.
  • the circuit board can be flexible.
  • the circuit board comprises a bottom copper ground layer and a Kapton™ layer mounted to the upper surface of the bottom copper ground layer.
  • the circuit board can also comprise a plurality of underlying substantially rigid support structures.
  • a central portion surrounding a central opening in the circuit board can have a rigid support structure mounted to the bottom surface of the bottom copper ground layer.
  • portions of the circuit board to which the connectors can be attached also have rigid support structures mounted to the bottom surface of the bottom copper ground layer.
  • the circuit board further comprises a plurality of board electrical traces formed thereon the top surface of the Kapton™ layer, each board electrical trace having a proximal end adapted to couple to an electrical trace of the transducer and a distal end adapted to couple to a connector, such as, for example, a cable for communication of signals therethrough.
  • a connector such as, for example, a cable for communication of signals therethrough.
  • the length of the circuit forming each electrical trace has a substantially constant impedance.
  • the circuit board also comprises a plurality of vias that pass through the Kapton™ layer and are in communication with the underlying ground layer so that signal return paths, or signal ground paths, can be formed. Further, the circuit board comprises a plurality of ground pins. Each ground pin has a proximal end that is coupled to the ground layer of the circuit board (passing through one of the vias in the Kapton™ layer) and a distal end that is adapted to couple to the connector.
  • FIG. 5B is a top plan view of an exemplary circuit board for mounting of an exemplary 256-element array having a 75 micron pitch.
  • FIG. 5C is a top plan view of the vias of the circuit board of FIG.5B that are in communication with an underlying ground layer of the circuit board.
  • the circuit board of FIG. 5B also defines bores that are sized and shaped to accept pins of the connectors such that, when the connector is mounted thereon portions of the circuit board, there will be correct registration of the respective electrical traces and ground pins with the connector.
  • FIG. 6 illustrates a partial enlarged top plan view of a portion of the exemplified circuit board showing, in Region A, the ground electrode layer of the transducer being wire bonded to an electrical trace on the interposer, which can be, in turn, wire bonded to ground pads of the circuit board.
  • the ground pads of the circuit board are in communication, through vias in the KaptonTM layer, with the underlying bottom copper ground layer.
  • the individual electrical traces of the transducer are wire bonded to individual board electrical traces of the circuit board.
  • the central opening of the circuit board underlies the backing material of the transducer.
  • FIG. 7A is an enlarged partial view of Region B of an exemplified transducer mounted to a portion of the circuit board.
  • a transducer mounting is shown in which the PZT stack is mounted to the substantially rigid central portion of the circuit board without an interposer.
  • the PZT stack is surface mounted onto the circuit board directly by means of, for example, a series of gold ball bumps.
  • gold ball bumping is a conventional surface mounting technique and represents another type of surface mounting technique consistent with those previously described.
  • the rigidized central portion of the circuit board can provide the same functionality as the interposer.
  • FIG.11A shows the ground electrode layer of the transducer (without interposer) wire bonded to the ground pads of the circuit board.
  • the gold ball bumps are applied directly onto the circuit board.
  • Each ball bump is positioned in communication with one electrical trace of the circuit board.
  • the PZT stack is secured to the circuit board by, for example and not meant to be limiting, a) use of an underfill, such as a UV curable adhesive; b) use of an ACF tape; c) electroplating pure Indium solder onto the electrodes of either the PZT or the circuit board and reflowing the Indium to provide a solder joint between the signal electrode on the PZT and the gold ball bump on the circuit board; and the like.
  • An arrayed transducer can be operatively connected to the processing unit of the system using the flex circuit as shown in FIGS. 2A-11.
  • the flex circuit can be operatively connected with a BTH connector.
  • BTH connectors are common and are available in a variety of sizes.
  • the BTH connector comprises a number of pins for mating with a BSH connector.
  • the number of pins can be at least one greater than the number of array elements or traces of flex.
  • the number of pins can be equal to twice the number of array elements or corresponding traces of flex.
  • 2x180 (360 total) pins can be used for the 256 traces on the flex circuit of a 256 element array.
  • 256 pins can be used for the exemplary 256 element array.
  • the BSH connector can be connectively seated within the BTH.
  • the BSH connector is operatively connected with an interface such as a printed circuit board that is terminated with a plurality of coaxial cables.
  • a larger common cable formed from the plurality of coaxial cables can be terminated with a ZIF end for interfacing with the processing unit of the ultrasound system at a ZIF receptacle or interfacing site.
  • One exemplary ZIF connector that can be used is a 360 Pin DLM6 ITT Cannon ZIFTM connector as available from ITT Corporation of White Plains, NY.
  • alternative ZIFTM connectors can be used for interfacing with the processing unit and can have more or less than 360 pins.
  • the connection can comprise a cable or bundle of cables.
  • the cable can connect each element of the array to the processing unit in a one-to-one relationship; that is, each element can be electrically connected with its own signal and a ground lead to a designated connection point in the processing unit whereby the plurality of individual element connections are bundled together to form the overall cable.
  • each individual electrical connection can be unbundled and not physically formed into a cable or cable assembly.
  • Suitable cables can be coaxial cables, twisted pairs, and copper alloy wiring.
  • Other connection means can be via non-physically connected methods such as RF links, infrared links, and similar technologies where appropriate transmitting and receiving components are included.
  • the individual element connections can comprise coaxial cable of a type typically used for connecting array elements to processing units.
  • These coaxial cables can be of a low loss type.
  • the coaxial cables typically comprise a center conductor and some type of outer shielding insulated from the center conductor and encased in an outer layer of insulation.
  • These coaxial cables can have nominal impedances appropriate for use with an array.
  • Example nominal impedances can be 50 ohms or more, including 50 ohms, 52 ohms, 73 ohms, 75 ohms or 80 ohms.
  • An exemplary medical cable for use with one or more of the ultrasound imaging systems described herein comprises a minimum of 256 coaxial cables of 40 AWG with a nominal impedance of about 75 ohms with coaxial cable lengths of about 2.0m.
  • the length can be less than 2.0m or greater than 2.0m.
  • the medical cable jacket length can accommodate the cable length, can include additional metal sheaths for electrical shielding and can be made of PVC or other flexible materials.
  • Cables and the connections for connecting an array transducer to the processing unit, including those described herein, can be fabricated by companies such as Precision Interconnect - Tyco Electronics (Tyco Electronics Corporation, Wilmington, Delaware).
  • the exemplary cable, at the proximal end, can further comprise a flex/strain relief.
  • the exemplary cable, at the distal end, can comprise a flex/strain relief and be terminated to two PCBs interfacing between the coaxial cables and the flex circuit board, wherein each PCB has one BSH-090-01-L-D-A Samtec connector (Samtec, Inc., New Albany, IN) and 75 Ohm characteristic impedance traces, with cables terminated from both sides of the PCB in a staggered layout.
  • the cable can use a "flex circuit" method of securing and connecting the plurality of coax cables which comprise the large cable. In an exemplary embodiment, the array has 256 elements.
  • the array is mounted in the central region of a flex circuit.
  • the flex circuit has two ends such that the odd numbered elements 1,3,5,7...255 are terminated on the left end of the flex with a BTH-090 connector labeled J1, and the even numbered elements 2,4,6,8...256 are terminated on the right end of the flex with a BTH-090 connector labeled J3.
  • the elements are terminated in sequence along the upper and bottom rows of their respective connectors with GND (signal return) pins evenly dispersed across the connector in a repeated pattern.
  • the repeat pattern is defined from the outer edge of the flex towards the central region of the flex and is as follows: 2 signal pins, GND, 3 signal pins, GND.
  • A schematic showing a side view of the folded flex circuit, with the array mounted in the central region of the flex, is shown in FIG. 12A, and an associated pin out table for the connectors on the flex circuit is shown in FIG. 12B.
  • the flex circuit can be connected to the exemplary cable described above.
  • the flex circuit can be connected to a Precision Interconnect -Tyco Electronics medical cable assembly.
  • connection from the flex to the ZIFTM connector can be made through two scanhead PCBs followed by a coax cable bundle and 12 short PCBs each with a 2x15 connector inserted into ZIFTM pins.
  • Each scanhead PCB (total of two) can comprise one BSH-090 connector and 128 traces (all traces with a controlled impedance of, for example, 75 Ohms at 30 MHz) and can be terminated with 128 (40 AWG, 75 Ohm) coax cables.
  • the PCB can have outer dimensions of 0.525" by 2.344".
  • FIG. 13 illustrates the design of the two scanhead PCBs.
  • FIG 14 illustrates how the PCBs can be connected to the flex circuit and illustrates the staggered nature of how the coax cable ribbons can be soldered to the PCB.
  • Each scanhead PCB can have one BSH-090 connector. The pin-out for each scanhead trace can be matched to the pin out for the Jl and J3 connector.
  • An exemplary medical cable, as partially shown in FIG. 15A, comprises a ZIF connector on the proximal end, the end of the cable which connects to the processing unit.
  • FIG. 15B illustrates a pin out that can be used for the exemplary ZIF Connector.
  • the pins labeled as G are signal return pins.
  • the pins labeled as N/C are not terminated with coaxial cables; these pins are reserved for use either for shielding to chassis ground or for other unspecified functions.
  • the N/C pins can be accessible by simply removing the ZIF housing and soldering to the unused traces on any of the 12 PCBs connected to the ZIF.
  • the 12 individual PCBs used to connect to the ZIF connector have coax cables connected on one or both sides of the board.
  • One edge of the PCB can have a connector suitable for insertion into the ZIF connector (Samtec SSW or equivalent) and each PCB shall have the appropriate traces and vias required to connect the correct coaxial cable to the correct ZIF pin.
  • Each PCB can have a Samtec SSW, or equivalent, connector with two rows of 15 pins, although the number of coax cables may differ on some of the 12 PCBs as defined in FIG. 15B.
  • the general layout of the pins on the 2x15 connector is universal and is shown in Table 1.
  • One of the 12 PCBs requires provisions in the trace layout to include an EEPROM as defined in Fig 15B. Two of the 12 PCBs require some of the pins to be terminated as required to provide the hard-coded PROBE ID number that will identify the particular array design included inside the array assembly.
  • connection methods can be used including connectors of various styles.
  • the impedance can be 75 Ohms at a center frequency of 30 MHz.
  • FIG. 16 is a block diagram illustrating an exemplary high frequency ultrasonic imaging system 1600.
  • the blocks shown in the various Figures can be functional representations of processes that take place within an embodiment of the system 1600. In practice, however, the functions may be carried out across several locations or modules within the system 1600.
  • the exemplary system 1600 comprises an array transducer 1601, a cable 1619, and a processing unit 1620.
  • the cable 1619 connects the processing unit 1620 and the array transducer 1601.
  • the processing unit may comprise software and hardware components.
  • the processing unit can comprise one or more of a multiplexer (MUX)/front end electronics 1602, a receive beamformer 1603, a beamformer control 1604, a transmit beamformer 1605, a system control 1606, a user interface 1607, a scan converter 1608, a video processing display unit 1609, and processing modules including one or more of an M-mode processing module (not shown), a PW Doppler processing module 1611, a B-mode processing module 1612, a color flow processing module 1613, a 3-D mode processing module (not shown), and an RF mode processing module 1615.
  • the center frequency range of the exemplary system can be about 15-55 MHz or higher. When measured from the outside edge of the bandwidths, the frequency range of the exemplary system can be
  • the array transducer 1601 interfaces with the processing unit 1620 at the MUX/front end electronics (MUX/FEE) 1602.
  • the MUX portion of the MUX/FEE 1602 is a multiplexer which can electronically switch or connect a plurality of electrical paths to a lesser number of electrical paths.
  • the array transducer 1601 converts electrical energy to ultrasound energy and vice versa and is electrically connected to the MUX/FEE 1602.
  • the MUX/FEE 1602 comprises electronics which generate a transmit waveform which is connected to a certain subset of the elements of the array, namely the elements of the active aperture. The subset of elements is called the active aperture of the array transducer 1601.
  • the electronics of the MUX/FEE 1602 also connects the active aperture of the array to the receive channel electronics. During operation, the active aperture moves about the array transducer 1601 , in a manner determined by components described herein.
  • the MUX/FEE 1602 switchably connects the elements of the active aperture to transmit and receive channels of the exemplary system.
  • the up to 64 elements of the active aperture are contiguous.
  • Other embodiments of the invention share the MUX for both the transmit channels and the receive channels.
  • the front end electronics portion of the MUX/FEE 1602 supply a high voltage signal to the elements of the active aperture of the array transducer 1601.
  • the front end electronics can also provide protection circuitry for the receiver channels to protect them from the high voltage transmit signal, as the receive channels and the transmit channels have a common connection point at the elements of the array transducer 1601.
  • the protection can be in the form of isolation circuitry which limits the amount of transmit signal that can leak or pass into the receive channel to a safe level which will not cause damage to the receive electronics.
  • Characteristics of the MUX/FEE 1602 include a fast rise time on the transmit side, and high bandwidth on the transmit and receive channels.
  • the MUX/FEE 1602 passes signals from the transmit beamformer 1605 to the array transducer 1601.
  • the transmit beamformer 1605 generates and supplies separate waveforms to each of the elements of the active aperture.
  • the waveform for each element of the active aperture is the same.
  • the waveforms for each element of the active aperture are not all the same and in some embodiments have differing center frequencies.
  • each separate transmit waveform has a delay associated with it. The distribution of the delays for each element's waveform is called a delay profile. The delay profile is calculated in a way to cause the desired focusing of the transmit acoustic beam to the desired focal point.
  • the transmit acoustic beam axis is perpendicular to the plane of the array 1601, and the beam axis intersects the array 1601 at the center of the active aperture of the array transducer 1601.
  • the delay profile can also steer the beam so that it is not perpendicular to the plane of the array 1601.
  • a delay resolution of 1/16 can be used; that is, 1/16 of the period of the transmit center frequency, though other delay resolutions are contemplated within the scope of this invention. For example, at a 50 MHz center frequency, the period is 20 nanoseconds, so 1/16 of that period is 1.25 nanoseconds, which is the exemplary delay resolution used to focus the acoustic beam.
  • the delay resolution may be different than 1/16th of a period; for example, delay resolutions less than 1/16th (e.g., 1/24, 1/32, etc.) as well as delay resolutions greater than 1/16th (e.g., 1/12, 1/8, etc.) are contemplated within the scope of this invention.
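  • As an illustration of the delay-profile arithmetic described above, the following sketch (a minimal Python example, not the patented implementation) computes a focused transmit delay profile and quantizes it to 1/16 of the transmit period; the 75 micron pitch matches the exemplary array, while the 5 mm focal depth and function names are assumed for illustration:

```python
import numpy as np

def transmit_delay_profile(n_elems=64, pitch=75e-6, focus_z=5e-3,
                           f0=50e6, c=1540.0):
    # Element positions across the active aperture, centered on the beam axis.
    x = (np.arange(n_elems) - (n_elems - 1) / 2) * pitch
    # Path length from each element to the on-axis focal point.
    path = np.sqrt(x ** 2 + focus_z ** 2)
    # Outer elements (longest path) fire first; the center element fires last,
    # so that all wavefronts arrive at the focal point at the same time.
    delays = (path.max() - path) / c
    # Quantize to the exemplary resolution of 1/16 of the transmit period,
    # i.e. 1.25 ns at a 50 MHz center frequency.
    step = 1.0 / (16 * f0)
    return np.round(delays / step) * step

profile = transmit_delay_profile()
```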
  • the receive beamformer 1603 can also be connected to elements of the active aperture of the array transducer 1601 via the MUX/FEE 1602. During transmit, an acoustic signal penetrates into the subject and generates a reflected signal from the tissues of the subject. The reflected signal is received by the elements of the active aperture of the array transducer 1601 and converted into an analog electrical signal emanating from each element of the active aperture. The electrical signal is sampled to convert it from an analog to a digital signal in the receive beamformer 1603. Embodiments of the invention use quadrature sampling for digitization of the received signal.
  • the array transducer 1601 also has a receive aperture that is determined by the beamformer control 1604, which tells the receive beamformer 1603 which elements of the array to include in the active aperture and what delay profile to use.
  • the receive beamformer 1603 of the exemplary embodiment is a digital beamformer.
  • the receive beamformer 1603 introduces delays into the received signal of each element of the active aperture.
  • the delays are collectively called the delay profile.
  • the receive delay profile can be dynamically adjusted based on time-of-flight - that is, the length of time that has elapsed during the transmission of the ultrasound into the tissue being imaged.
  • the time-of-flight is used to focus the receive beamformer to a point of focus within the tissue.
  • the depth of the receive beam is adjusted using a delay profile which incorporates information pertaining to the time-of-flight of the transmitted beam.
  • the received signal from each element of the active aperture is summed wherein the sum incorporates the delay profile.
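  • A minimal sketch of the delayed summation just described, using illustrative element positions and only coarse integer-sample delays for brevity; the actual beamformer applies the delay profile supplied by the beamformer control with finer resolution:

```python
import numpy as np

def receive_delays(element_x, focus_x, focus_z, c=1540.0):
    # One-way times from the focal point back to each element, referenced to
    # the latest-arriving element so that every delay is non-negative.
    t = np.sqrt((np.asarray(element_x) - focus_x) ** 2 + focus_z ** 2) / c
    return t.max() - t

def delay_and_sum(channel_data, delays, fs):
    # channel_data: (n_elements, n_samples) per-element receive signals.
    channel_data = np.asarray(channel_data)
    n_samples = channel_data.shape[1]
    out = np.zeros(n_samples)
    for ch, tau in zip(channel_data, delays):
        k = int(round(tau * fs))          # coarse integer-sample delay only
        out[k:] += ch[:n_samples - k]     # shift each channel, then accumulate
    return out
```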
  • the summed received signal flows along the receive channel from the receive beamformer 1603 to one or more of the processing module(s) 1611, 1612, 1613, and/or 1615 (including those not shown in FIG. 16), as selected by the user interface 1607 and system controls 1606, which act based upon a user input.
  • the beamformer control 1604 is connected to the MUX/FEE 1602 through the transmit beamformer 1605 and the receive beamformer 1603. It is also connected to the system control 1606.
  • the beamformer control 1604 provides information to the MUX/FEE 1602 so that the desired elements of the array transducer 1601 are connected to form the active aperture.
  • the beamformer control 1604 also creates and sends to the receive beamformer 1603 the delay profile for use with the reception of a particular beam. In embodiments of the invention, the receive delay profile can be updated repeatedly based upon the time of flight.
  • the beamformer control 1604 also creates and sends to the transmit beamformer 1605 the transmit delay profile.
  • the system control 1606 operates in a manner known to one of ordinary skill in the art.
  • the scan converter 1608 operates in a manner known in the art and takes the raw image data generated from the one or more of the processing modules and converts the raw image data into an image that can be displayed by the video processing/display 1609. For some processing modes of operation, the image can be displayed without using the scan converter 1608 if the video characteristics of the image are the same as those of the display.
  • the processing modules function in a manner known to one of ordinary skill in the art.
  • the pulse repetition frequency (PRF) can be high due to the high center frequencies of embodiments of this invention.
  • the maximum unaliased velocities which may be measured are proportional to the PRF and inversely proportional to the transmit center frequency.
  • the PRFs required to allow for the unaliased measurement of specific velocities given specific transmit center frequencies may be calculated in a method known to one of ordinary skill in the art.
  • the transmit center frequencies used are in the range of 15 to 55 MHz, or higher, and the blood flow velocities can be as high as 1 m/s and in some cases greater than 1 m/s; unaliased measurement of the Doppler signal resulting from those velocities will require the PRF for PW Doppler to be up to 150 KHz.
  • Embodiments of the invention have a PW Doppler mode which supports PRFs up to 150 KHz, which for a center frequency of 30 MHz allows for unaliased measurement of blood velocities up to 1.9 m/s in mice with a zero degree angle between the velocity vector of the moving target and the ultrasound beam axis.
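  • As a worked check of the figures above, the standard PW Doppler aliasing limit v_max = c·PRF/(4·f0·cosθ) reproduces the quoted value; the sound speed and zero-degree angle are the assumptions stated above:

```python
c = 1540.0        # m/s, assumed speed of sound in tissue
prf = 150e3       # Hz, maximum PW Doppler PRF of the exemplary system
f0 = 30e6         # Hz, transmit center frequency
v_max = c * prf / (4.0 * f0)    # cos(0 deg) = 1
print(v_max)      # 1.925 m/s, consistent with the ~1.9 m/s stated above
```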
  • the RF module 1615 uses interpolation. If the sampling method used is quadrature sampling, then the RF signal may be reconstructed from the quadrature baseband samples by zero padding and filtering, as would be known to one of ordinary skill in the art. If Nyquist sampling is used, then no reconstruction is required since the RF signal is sampled directly. In certain embodiments, the RF module 1615 reconstructs the RF signal from the quadrature samples of the receive beamformer output. The sampling takes place at the center frequency of the receive signal, but in quadrature, giving a baseband quadrature representation of the signal. The RF signal is created by first zero padding the quadrature sampled data stream, with the number of zeros determined by the desired interpolated signal sample rate.
  • a complex bandpass filter is applied to the zero padded data stream, which rejects the frequency content of the zero padded signal that is outside the frequency band from fs/2 to 3fs/2, where fs is the sample frequency.
  • the result after filtering is a complex representation of the original RF signal.
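  • A minimal sketch, under assumed parameters, of the reconstruction just described: the baseband quadrature samples are zero padded and passed through a complex bandpass filter centered at fs, and the real part of the result is the reconstructed RF line. The interpolation factor, tap count, and use of scipy.signal.firwin are illustrative choices, not the patented filter design:

```python
import numpy as np
from scipy.signal import firwin

def iq_to_rf(iq, fs=30e6, L=4, ntaps=129):
    # 1) Zero pad: insert L-1 zeros between successive IQ samples, raising the
    #    sample rate from fs to L*fs.
    padded = np.zeros(len(iq) * L, dtype=complex)
    padded[::L] = iq
    fs_out = fs * L

    # 2) Complex bandpass filter centered at fs (passband roughly fs/2..3fs/2),
    #    built by modulating a lowpass prototype; it keeps the spectral image
    #    that corresponds to the original RF band.
    lp = firwin(ntaps, cutoff=fs / 2, fs=fs_out)
    n = np.arange(ntaps) - (ntaps - 1) / 2
    bp = lp * np.exp(1j * 2 * np.pi * fs * n / fs_out)
    analytic = np.convolve(padded, bp, mode="same") * L

    # 3) The real part of the complex result is the reconstructed RF signal.
    return np.real(analytic)
```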
  • the RF signal is then passed on to the main computer unit for further processing such as digital filtering and envelope detection and display.
  • the real part of the complex representation of the RF signal may be displayed. For example, the RF data acquired for a particular scan line may be processed and displayed. Alternatively, RF data from a certain scan line averaged over a number of pulse echo returns can be displayed, or RF data acquired from a number of different scan lines can be averaged and displayed.
  • the scan lines to be used for acquisition of the RF data can be specified by the user based on evaluation of the B-Mode image, by placing cursor lines overlaid on the B-Mode image.
  • a Fast Fourier Transform (FFT) of the RF data can also be calculated and displayed.
  • the acquisition of RF data and the acquisition of B-Mode data can be interleaved so as to allow for the display of information from both modes concurrently in real time.
  • the acquisition of physiological signals such as the ECG signal can also occur concurrently with the acquisition of RF data.
  • the ECG waveform can be displayed while the RF data is acquired.
  • the timing of the acquisition of RF data can be synchronized with user defined points within the ECG waveform, thereby allowing for the RF data to be referenced to specific times during a cardiac cycle.
  • the RF data can be stored for processing and evaluation at a later time.
  • FIG. 17 shows a block diagram of the system 1600 further illustrating components of an embodiment of the invention.
  • the array transducer 1601 is connected to the front end transformer 1702 via a cable 1619.
  • the cable 1619 comprises signal pathways from the elements of the array transducer 1601 to the front end transformers 1702.
  • An exemplary embodiment of the cable is described herein and comprises individual micro-coax cables.
  • connectors can be used on one or both ends of the cable 1619.
  • a connector with pins equal to twice the number of elements can be used and an exemplary connector is described herein.
  • a signal and a ground path can be used for each element of the array transducer 1601. In other embodiments of the invention, the ground connection is shared for a grouping of elements.
  • FIG. 17 provides representative details of the circuitry for four elements of the array transducer 1601 as examples for the larger system 1600 wherein there is a front end transformer
  • the front end transformers 1702 and transmit output stages 1703 are more fully described below.
  • the electrical signal from an element of the array transducer 1601 passes through the front end transformer 1702 into the receive multiplexer 1704.
  • the receive multiplexer 1704 selects which element and front end transformer are connected to the receive channel 1705.
  • the receive channel 1705 comprises a low noise amplifier and a time gain control, both more fully described below.
  • the signal then passes from the receive channel 1705 into the analog-to-digital conversion 1706 module where it is digitized.
  • the digital received signal then passes into the receive beamformer 1707, which is a digital beamformer.
  • a delay profile generated in the beamformer control is applied to the received signal.
  • the signal from the receive beamformer 1707 travels into the synthetic aperture memory 1710.
  • the synthetic aperture memory adds the received data from two successive ultrasound lines.
  • An ultrasound line is considered to be the data resulting from returning ultrasound echoes that is received after the transmission of an ultrasound pulse into tissue.
  • Synthetic aperture imaging performs as one of ordinary skill in the art would understand. In part, synthetic aperture imaging refers to a method of increasing the effective size of the transmit or receive aperture. For example, if there are 64 channels in the beamformer, during the reception of one line of ultrasound data, up to 64 transmit channels and 64 receive channels can be used. Synthetic aperture imaging will use two lines of ultrasound data, added together. The first ultrasound line can be acquired with a receive aperture which can span elements 33 to 96.
  • the second ultrasound line is received with an aperture segmented into two blocks, located at elements 1 to 32 and 97 to 128. Both ultrasound lines use the same transmit aperture.
  • the result is as if the receive aperture consisted of 128 channels located at elements 1 to 128, provided that there is no appreciable motion of the tissue being imaged during the time required to acquire the two lines of ultrasound data.
  • two ultrasound lines are required rather than just one, so the frame rate is lowered by a factor of two.
  • the two receive apertures can be arranged in a different way, as long as together they form a 128 element aperture. Alternatively, the transmit aperture size can be increased while keeping the receive aperture the same.
  • More than 2 ultrasound lines can be used to increase the aperture by more than a factor of two.
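  • A minimal sketch of the two-line synthetic aperture summation described above; line_a and line_b are the beamformed data from the two complementary 64-element receive apertures acquired with the same transmit aperture (variable and function names are illustrative):

```python
import numpy as np

def synthetic_aperture_line(line_a, line_b):
    # Coherent sum of two receive lines (elements 33-96, then elements 1-32
    # plus 97-128) emulating a single 128-element receive aperture. Valid only
    # if the tissue does not move appreciably between the two acquisitions;
    # the frame rate is halved because two transmit events are needed per line.
    return np.asarray(line_a) + np.asarray(line_b)
```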
  • the signal from the synthetic aperture memory 1710 is then stored in the RF cine buffer 1713, which is a large memory that stores many received RF lines, as controlled by the asynchronous processing control module 1714.
  • the buffered receive signal is then read into the signal processing unit 1715 at an appropriate rate.
  • the signal processing unit 1715 may be implemented with a dedicated CPU on the beamformer control board.
  • the received signal passes from the signal processing unit 1715 to the computer unit 1717 where it is further processed according to the mode selected by the user.
  • the processing of the received signal by the computer unit 1717 is generally of the type known to a person of ordinary skill in the art, with exceptions as noted herein.
  • the computer unit 1717 comprises system software configured to process signals according to the operation mode of the system.
  • the system software in the main computer unit 1717 may be configured to carry out B-Mode processes which may include, for example, preprocessing; persistence processing; cineloop image buffer; scan conversion; image pan; zoom and postprocessing.
  • the system software in the main computer unit 1717 may also be configured to carry out processes for color flow imaging (CFI), which may include, for example, threshold decision matrix; estimate filtering; persistence and frame averaging; cineloop CFI image buffer; scan conversion; color maps and priority.
  • the system software in the main computer unit 1717 may also be configured to carry out processes for PW Doppler, which may include, for example spectral estimation (FFT); estimate filtering; cineloop spectral data buffer; spectral display generation; postprocessing and dynamic range and audio processing.
  • the user interface panel 1720 is similar to the standard user interface found on most clinical ultrasound systems.
  • the B-Mode user interface may have image format controls that include image depth; image size; dual image activate; dual image left/right select; flip image left/right; flip image up/down and zoom.
  • Transmit controls may include transmit power (transmit amplitude); transmit focal zone location; number of transmit zones selection; transmit frequency and number of cycles.
  • Image optimization controls may include B-Mode Gain; TGC sliders; preprocessing; persistence; dynamic range; frame rate/resolution control and post-processing curves.
  • a color flow imaging user interface may have image format controls that may include color flow mode select (e.g., color flow velocity, Power Doppler, Tissue Doppler); trackball; steering angle; color box position/size select (after selection trackball is used to adjust position or size); preset recall; preset menu and invert color map.
  • Transmit controls may include transmit power (transmit amplitude); transmit focal zone location and transmit frequency.
  • Image optimization controls may include color flow gain; gate size; PRF (alters velocity scale); clutter filter select; frame rate/resolution control; preprocessing select; persistence; dynamic range (for Power Doppler only) and color map select.
  • A PW Doppler user interface may have PW Doppler format controls that may include PW Doppler mode select; trackball; activate PW cursor (trackball is used to adjust sample volume position); sample volume size; Doppler steering angle; sweep speed; update (selects either simultaneous or interval update imaging); audio volume control and flow vector angle.
  • Transmit controls may include transmit power (transmit amplitude) and transmit frequency.
  • Spectral Display optimization controls may include PW Doppler gain; spectral display size; PRF (alters velocity scale); clutter filter select; preprocessing and dynamic range.
  • An exemplary M-Mode user interface may have image format controls including M-Mode cursor activation; trackball (used to position cursor); strip size and sweep speed. Transmit controls may include transmit power (transmit amplitude); transmit focal zone location; transmit frequency and number of cycles. Image optimization controls may include M-Mode gain; preprocessing; dynamic range and post-processing.
  • An exemplary RF Mode user interface may have, for example, RF line acquisition controls that may include RF line position; RF gate; number of RF lines acquired; RF region activate; RF region location; RF region size; number of RF lines in region; averaging; and B-Mode interleave disable.
  • Transmit controls may include transmit power (transmit amplitude); transmit focal zone location; transmit f-number; transmit frequency; number of cycles; acquisition PRF and steering angle.
  • Receive processing controls may include RF Mode gain; filter type, order; window type and number of lines averaged. The digital samples of the received signal are processed at a rate which is generally different from the rate at which the data is acquired.
  • the processing rate is the rate at which data is displayed, typically about 30 frames per second (fps).
  • the data can be displayed at a rate up to the acquisition rate or can be displayed at less than about 30 fps.
  • the data can be acquired at much faster frame rates, in certain embodiments of the invention at about 300 frames per second, or at a speed necessary to acquire the diagnostic information desired.
  • image data of rapidly moving anatomical structures, such as a heart valve, can be acquired using a faster frame rate and then can be displayed at a slower frame rate.
  • Data acquisition rates can be less than 30 fps, 30 fps, or more than 30 fps.
  • data acquisition rates can be 50, 100, 200, or 300 or more fps.
  • the display rate can be set such that it does not exceed that which the human eye can process. Some of the frames which can be acquired can be skipped during display, although all of the data from the receive beamformer output is stored in an RF data buffer such as the RF cine buffer 1713.
  • the data is sometimes referred to as RF data or by the sampling method used to acquire the data (for instance, in the case of quadrature sampling, the data can also be referred to as baseband quadrature data).
  • the quadrature or RF data is processed prior to display. The processing may be computationally intensive, so there are advantages to reducing the amount of processing used, which is accomplished by processing only the frames which are to be displayed at the display rate, not the acquisition rate.
  • the frames that were skipped over during display can be viewed when live imaging stops or the system is "frozen."
  • the frames in the RF buffer 1713 can be retrieved, processed, and played back at a slower rate, e.g., if the acquisition rate is 300 frames per second, the play back of every frame at 30 frames per second would be 10 times slower than normal, but would allow the operator to view rapid changes in the image.
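  • A minimal sketch of the frame handling just described, with the exemplary 300 fps acquisition and roughly 30 fps display; during live imaging only every tenth frame is processed for display, while all frames remain in the RF cine buffer for later playback (function name and rates are illustrative assumptions):

```python
def live_display_frames(n_acquired, acq_fps=300, disp_fps=30):
    # Indices of the acquired frames that are processed and shown live.
    step = max(1, round(acq_fps / disp_fps))
    return list(range(0, n_acquired, step))

print(live_display_frames(30))   # [0, 10, 20] -> 1 of every 10 frames shown

# On freeze, every buffered frame can be processed and replayed at 30 fps,
# i.e. ten times slower than real time for a 300 fps acquisition.
```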
  • the playback feature is usually referred to as the "Cineloop" feature by persons of ordinary skill in the art. Images can be played back at various rates, or frame by frame, backwards and forwards.
  • the system 1600 shown in FIG. 17 can also comprise various items which one of ordinary skill in the art would recognize as being desirable for the function of the system, such as clocks 1712, memory, sound card and speakers, video card and display, etc. and other functional blocks as shown in FIG. 17.
  • FIGS. 18a and 18b provide additional detail of an embodiment of the MUX/Front End Electronics 1702, 1703, 1704, 1708 and the receive beamformer 1707 and transmit beamformer 1709 functions according to an embodiment of the present invention.
  • a channel, for instance a receive channel such as channel 1 1801, may be switchably connected to elements numbered 1, 65, 129, and 193 in FIG. 18a so that only one of those four elements is connected to channel 1 1801 at any given time.
  • the assignment of four switchably connected elements to a channel is done such that contiguous elements of any given subset of elements can comprise the active aperture. For example, if the array transducer were comprised of 256 elements, then 64 or fewer elements can form the subset that comprises the active aperture.
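  • A minimal sketch of the element-to-channel multiplexing described above for a 256-element array and 64 channels: each channel can be switched to one of four elements spaced 64 apart, so any contiguous 64-element active aperture maps onto the 64 channels without conflict (function name is illustrative):

```python
def channel_for_element(element, n_channels=64):
    # element and channel numbering are 1-based, as in FIG. 18a.
    return ((element - 1) % n_channels) + 1

# Elements 1, 65, 129 and 193 all share channel 1, as described above.
print([channel_for_element(e) for e in (1, 65, 129, 193)])   # [1, 1, 1, 1]

# Any contiguous 64-element active aperture uses each channel exactly once.
assert len({channel_for_element(e) for e in range(33, 97)}) == 64
```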
  • the multiplexing of the elements of the array transducer 1601 for the receive cycle can be carried out by an RX switch 1817 as shown in an exemplary diagram (FIG. 18b) of the front end 1802.
  • a control signal 1818 from the beamformer control 1711 determines which RX switch 1817 is activated, thereby connecting the chosen element of the four (4) available elements for that module 1802 to the receive channel.
  • the multiplexing scheme illustrated in FIGS. 18a and 18b can be applied to transducers of varying numbers of elements (other than 256 elements) and of varying maximum active aperture sizes (other than up to 64 elements).
  • the exemplary front end 1816 shown in FIG. 18b also comprises the transformer 1819 and pulser 1820, which are described in more detail below.
  • the front end 1816 provides isolation of the receive channel from the transmit waveform, discussed previously herein.
  • the received signal from the selected array transducer element passes into the low noise amplifier (LNA) 1804. From the LNA 1804, the amplified signal passes into the time gain control (TGC) 1805. Since elapsed time is proportional to the depth of the received reflected signals, this is also referred to as a depth dependent gain control. In an ultrasound system, as time goes by from the transmission of an ultrasound wave, the signal passes deeper into the tissue and is increasingly attenuated; the reflected signal also suffers this attenuation. The TGC 1805 amplifies the received signal according to a time varying function in order to compensate for this attenuation.
  • the factors which can be used to determine the time varying TGC gain are time of flight, tissue characteristics of the subject or subject tissue under study, and the application (e.g. imaging modality).
  • the user may also specify gain as a function of depth by adjusting TGC controls on the user interface panel 1607.
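  • A minimal sketch of a depth-dependent gain curve of the kind the TGC 1805 applies; the attenuation coefficient (0.5 dB/cm/MHz), center frequency, and sound speed are assumed illustrative values, and a real curve would also fold in the user's TGC slider settings:

```python
import numpy as np

def tgc_gain_db(n_samples, fs=50e6, f0_mhz=30.0, alpha_db_cm_mhz=0.5, c=1540.0):
    t = np.arange(n_samples) / fs            # elapsed time since transmit (s)
    depth_cm = 100.0 * c * t / 2.0           # echo depth in cm (round trip / 2)
    # Two-way attenuation in dB: alpha * f0 * (2 * depth).
    return 2.0 * alpha_db_cm_mhz * f0_mhz * depth_cm

def apply_tgc(rx_line, fs=50e6):
    gain = 10.0 ** (tgc_gain_db(len(rx_line), fs) / 20.0)
    return np.asarray(rx_line) * gain
```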
  • Embodiments may use, for example, an Analog Devices (Norwood, MA) AD8332 or similar device to perform the LNA 1804 and TGC 1805 functions. From the TGC 1805, the receive signal passes into the receive beamformer 1803 where it is sampled by a sampler, in this embodiment, the analog-to-digital converters 1807 and 1808.
  • only one analog-to-digital converter is used if sampling is done at a rate greater than the Nyquist rate; for instance at 2 or 3 times the Nyquist rate, where the Nyquist rate involves sampling the ultrasound signals from the individual elements at a rate which is at least twice as high as the highest frequency in the signal.
  • quadrature sampling is employed and two analog-to-digital converters are used, namely the "I" and the "Q" sampler.
  • the receive signal is digitized in blocks 1807 and 1808 using quadrature sampling analog-to-digital converters (ADC); two ADCs are required per channel, with sampling clocks shifted 90° out of phase.
  • the sample rate used can be the center frequency of the receive signal.
  • direct sampling would use a sampling rate in theory of at least twice the highest frequency component in the receive signal, but practically speaking at least three times the sampling rate is preferred. Direct sampling would use one ADC per channel.
  • the digitized received signal can undergo a correction for the DC offset of the ADC. This is implemented by the subtraction of a value equal to the measured DC offset at the ADC output.
  • Each ADC may have a different DC offset correction value.
  • the DC offset may be determined by averaging a number of digital samples appearing at the output of the ADC with no signal present at the receive channel input, for example, during a calibration period at system start up.
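  • A minimal sketch of the DC offset correction just described: the ADC output is averaged during a no-signal calibration period, and that per-ADC value is subtracted from subsequent samples (function names are illustrative):

```python
import numpy as np

def measure_dc_offset(calibration_samples):
    # Average of ADC output samples taken with no signal at the channel input.
    return float(np.mean(calibration_samples))

def correct_dc_offset(samples, offset):
    # Each ADC (I and Q, per channel) keeps its own correction value.
    return np.asarray(samples, dtype=float) - offset
```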
  • the digitized signal next passes into a FIFO buffer 1822 where each sample is stored for an appropriate duration so that the appropriate delay profile can be implemented.
  • the delay can be implemented in both coarse and fine manners.
  • a coarse delay can be implemented by shifting the signal by one or more sample points to obtain the desired delay. For instance, if the desired delay is one sample period, then shifting by one sample in the appropriate direction provides a signal with the appropriate delay.
  • a fine delay can be implemented using an interpolation filter 1809. From the FIFO buffer 1822, the digitized received signal passes into the interpolation filter 1809 for the calculation of any fine delay.
  • the interpolation filter 1809 is used in a system where the sample period is greater than the appropriate fine delay resolution. For instance, if the sample rate is the center frequency of the ultrasound signal and is 50 MHz, the sample rate is one sample every 20 nanoseconds. However, a delay resolution of 1.25 nanoseconds (1/16 of 20 nanoseconds) is used in certain embodiments to provide the desired image quality, though other delay resolutions are contemplated within the scope of this invention.
  • the interpolation filter 1809 is used to calculate a value for the signal at points in time other than the sampled point.
  • the interpolation filter 1809 is applied to the in-phase and quadrature portions of the sampled signal.
  • Embodiments of the interpolation filter 1809 comprise a finite impulse response (FIR) filter.
  • the coefficients of each filter can be updated dynamically by the beamformer control module based on the time of flight, sample by sample.
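  • A minimal sketch of applying one channel's receive delay as a coarse integer-sample shift (the FIFO) plus a fine fractional delay realized with a short FIR interpolation filter; the windowed-sinc design and 7-tap length are assumed illustrative choices, not the patented coefficient set:

```python
import numpy as np

def fractional_delay_fir(frac, ntaps=7):
    # Windowed-sinc FIR approximating a delay of `frac` samples (0 <= frac < 1).
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(n - frac) * np.hamming(ntaps)
    return h / h.sum()

def apply_receive_delay(x, delay_samples):
    coarse = int(np.floor(delay_samples))             # implemented by the FIFO
    fine = round((delay_samples - coarse) * 16) / 16  # 1/16-sample resolution
    shifted = np.concatenate([np.zeros(coarse), np.asarray(x)])[:len(x)]
    return np.convolve(shifted, fractional_delay_fir(fine), mode="same")
```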
  • a phase rotation can be applied by a multiplier 1811 multiplying the in-phase and quadrature components by the appropriate coefficients.
  • the phase rotation is used to incorporate into the interpolated sample the correct phase relative to the ADC sample frequency.
  • the RX controller 1810 controls the FIFO modules and the interpolation filters.
  • the receive delay is updated dynamically, so the interpolation filter coefficients at each channel need to change at certain intervals.
  • the delay implemented by the FIFO also needs to change at certain intervals.
  • the receive aperture size is adjusted dynamically, so each channel becomes active at a specific time during the reception of the ultrasound signal; a channel is activated by multiplying by 1 instead of 0 at the "multiply" module 1811.
  • the multiply module 1811 can also apply a "weight" which is a value between 0 and 1, independently to each channel in the receive aperture. This process, which is known as apodization, is known to one skilled in the art.
  • the value by which the interpolated sample is multiplied may vary with time, so as to implement an apodized receive aperture which expands dynamically during the reception of the ultrasound signal.
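  • A minimal sketch of the per-channel weighting (apodization) described above, with a Hann window over the currently active channels as an assumed illustrative weighting; channels outside the active aperture are multiplied by 0, and the active count can grow as the aperture expands with depth:

```python
import numpy as np

def aperture_weights(n_channels, n_active):
    # Zero weight for inactive channels; Hann taper across the active ones.
    w = np.zeros(n_channels)
    lo = (n_channels - n_active) // 2
    w[lo:lo + n_active] = np.hanning(n_active)
    return w

def weighted_channel_sum(samples, n_active):
    # samples: one interpolated sample per channel at the current depth.
    return float(np.dot(aperture_weights(len(samples), n_active),
                        np.asarray(samples)))
```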
  • FIG. 18c is an exemplary embodiment of a receive controller (RX controller) in an embodiment according to the present invention.
  • the Receive Controller 1810 is used to program the correct delay profile, aperture size and receive apodization data into the processing block 1809 which implements the interpolation and phase rotation and apodization.
  • the Receive Controller 1810 of FIG. 18c sets the initial parameters (Initial Coarse Delay, Initial Phase) once per start-of-line (SOL) trigger and sets the dynamic parameters (Dynamic Focus, Dynamic Apodization) once per receive clock (RXCLK) period.
  • the initial receive delay profile is stored in RX Initial Aperture Memory 1822.
  • the dynamic receive delay profile is stored in the RX Dynamic Aperture Memory 1824.
  • the delay profile is loaded into the RXBF Buffer 1826 via the 64:16 Crosspoint Switch 1828 before the SOL trigger.
  • the crosspoint switch 1828 selects 16 of the 64 aperture channel configurations. These are used to program the 16 receive channels that are on a single Channel board.
  • the configuration for each receive line is stored in the Line Memory 1830.
  • Each line configuration in the Line Memory 1830 contains the Aperture Select Index, the Mode Select, and the Aperture Enable.
  • the Aperture Select index is used to determine the Aperture to Channel mapping.
  • the Mode Select is used to access multiple delay profiles.
  • the Aperture Enable index controls the initial aperture size.
  • the aperture select look-up table (AP_SEL LUT) 1832 is a method to reduce the number of possible configurations and therefore the number of bits required to be stored in the line memory.
  • the AP_SEL LUT 1832 is re-programmable.
  • the Memory Control 1834 is a state machine that decodes the line configuration.
  • the state machine is configured by the Control and Status memory 1836. It is configured differently for different modes (e.g. B-Mode, Color Flow Mode, PW Doppler Mode, etc.).
  • the Memory Control 1834 controls the loading of the aperture memory into the RXBF Buffer 1826 and generates the SOL_delayed and FIFO_WEN signals.
  • the pulse SOL_delayed is used to transfer the initial delay parameters into the RX Phase Rotation and RX Apodization block 1809 in a single RXCLK period.
  • the dynamic receive parameters are then transferred in each subsequent RXCLK period.
  • the FIFO_WEN signal starts the receive ADC data acquisition into the FIFO for the RX interpolation filter.
  • the Control and Status Memory 1836 also contains common parameters such as the Receive Length.
  • the Receive Length parameter determines how many receive samples to collect for each line. It is to be appreciated that increasing the number of receive channels allows for larger receive apertures, which can benefit deep imaging by improving lateral resolution and penetration.
  • the synthetic aperture mode allows for apertures greater than 64 to be used, but at the expense of a reduction in frame rate. With an increase in the number of receive channels, this can be done without a frame rate penalty.
  • the receive beamformer 1803 allows for multi-line beamforming. Multi-line beamforming allows for higher frame rates by processing multiple receive lines in parallel. Frame rate increases by a factor equal to the number of parallel receive lines.
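  • A minimal sketch of the frame-rate relationship stated above; the PRF and line count are assumed illustrative values, not specifications of the system:

```python
def frame_rate_hz(prf_hz, lines_per_frame, parallel_rx_lines=1):
    # With M receive lines beamformed per transmit event, a frame needs only
    # lines_per_frame / M transmit events.
    return prf_hz * parallel_rx_lines / lines_per_frame

print(frame_rate_hz(10e3, 256))      # ~39 frames/s with single-line beamforming
print(frame_rate_hz(10e3, 256, 2))   # doubled by forming 2 lines in parallel
```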
  • the signal from each receive beamformer 1803 is then summed by summers 1815.
  • the summed signal represents a received signal at a given time that is reflected from a given depth.
  • the summed received signal is then routed through modules described earlier and shown in FIG. 17, to the appropriate processing module for the mode of operation selected by the user.
  • transmit channel 1 1801 can be switchably connected to the transmit output stages corresponding to elements numbered 1, 65, 129, and 193 in FIGS. 18a and 18b so that only one of those four transmit output stages is connected to transmit channel 1 1801 at any given time.
  • transmit channel 2 can be switchably connected to the transmit output stages corresponding to elements 2, 66, 130 and 194, and so on. This is the performance of the multiplexing function of the MUX/Front End Electronics 1702, 1703, 1704, 1708 during the transmit cycle of the system.
  • the transmit signal which is multiplexed is the pair of signals designated by TXA 2002 and TXB 2004, which drive the gates of the transmit pulser MOSFETs QTDN 2006 and QTDP 2008 as shown in FIG.20.
  • These signals 2002, 2004 are unipolar signals of a sufficiently low level so that multiplexing by MOSFET type switches can be used.
  • the assignment of four switchably connected transmit output stages to a transmit channel is done such that contiguous elements of any given subset of elements can comprise the active transmit aperture. For example, in an array transducer comprised of 256 elements, 64 or fewer elements can form the subset that comprises the active transmit aperture.
  • the transmit multiplexing can be done after the transmit output stage using multiplexing circuitry able to accommodate a higher voltage bipolar signal.
  • the transmit beamformer 1812 generates the transmit waveform with the specified delay present in the waveform in that the waveform is not sent until the appropriate time per the delay profile.
  • the transmit waveform can be a low voltage signal, including a digital signal.
  • the transmit waveform can be a high voltage signal used by the array transducer to convert electrical energy to ultrasound energy.
  • the operation of the transformer 1819 and pulser 1820 are described in greater detail below.
  • one or more of each of the transmit channels within the active transmit aperture can produce a transmit waveform which can be delayed relative to a reference control signal. The number of transmit channels determines the maximum transmit aperture size.
  • the array transducer has 64 transmit channels or may have 96 or 128 transmit channels.
  • the delays can vary from channel to channel, and collectively the delays are referred to as the transmit delay profile.
  • Transmit beamforming may also include the application of a weighting function to the transmit waveforms, a process known to one of ordinary skill in the art as "apodization.”
  • Transmit apodization uses independent control of the amplitude of the transmitted waveform at each channel.
  • the benefit to image quality is improved contrast resolution due to a reduction in spurious lobes in the receive beam profile, which can be either side lobes or grating lobes.
  • Each transmitter output stage can have an independently controlled supply voltage, and control hardware.
  • Transmit waveshaping involves the generation of arbitrary waveforms as the transmit signal, i.e., the modulation of amplitude and phase within the transmit waveform.
  • the benefit is an improvement to axial resolution through shaping of the transmit signal spectrum.
  • Techniques such as coded excitation can be used to improve penetration without loss of axial resolution.
  • the transmit beamformer 1812 described herein may be implemented in one embodiment with an FPGA device.
  • a typical implementation of a transmit beamformer 1812 which provides a delay resolution of, for example, 1/16 the transmit clock period may require a clock which is 16 times the transmit clock frequency. For the frequency range of the system described here, this would imply a maximum clock frequency of 16 times 50 MHz, or 800 MHz, and a typical FPGA device may not support clock frequencies at that rate.
  • the transmit beamformer 1812 implementation described below uses a clock frequency within the FPGA of only eight (8) times the transmit clock frequency.
  • Each channel of the transmit beamformer is comprised of a TX controller 1814 and a Tx pulse generator 1813.
  • the TX controller 1814 uses a parameter called, for example, an ultrasound line number (also known as a ray number), to select the active transmit aperture through the appropriate configuration of the transmit multiplexer.
  • the ray number value identifies the origin of the ultrasound scan line with respect to the physical array. Based on the ray number, a delay value is assigned to each transmit channel in the active transmit aperture.
  • the TX pulse generator 1813 generates a transmit waveform for each transmit channel using waveform parameters and control signals as described herein.
  • FIG. 18d is an illustration of an exemplary transmit controller (TX controller) in an embodiment according to the present invention.
  • the transmit controller 1814 is used to program the TX pulse generator 1813 with the correct delay profile (coarse delay and fine delay for each channel) and transmit waveform for each line. It re-programs the TX pulse generator 1813 before each line.
  • the sequence of lines is used to produce a 2-D image. Each line requires a certain subset of the array elements to be used to form the transmit aperture. Each array element within the aperture must be connected to a channel in the TX pulse generator 1813, and the transmit channels must be configured to produce the desired transmit waveforms with delays according to the desired transmit delay profile.
  • the delay profile and transmit waveform for the entire aperture is stored in the TX Aperture Memory 1838.
  • Multiple delay profiles can be stored in the TX Aperture Memory 1838. Multiple delay profiles are required for B-Mode imaging in which multiple focal zones are used, and PW Doppler and Color Flow Imaging modes in which the Doppler mode focal depth and transmit waveforms are different than those used for B-Mode.
  • the TX Aperture Memory 1838 contains delay profile and transmit pulse wave shape data for a 64 channel aperture. On each Channel Board there are 16 transmit channels, each of which can be connected to one of four different array elements through a transmit output stage. A 64:16 crosspoint switch 1840 is used to route the correct transmit waveform data sets to each of the 16 channels.
  • the control of the other 48 channels is implemented on the other 3 channel boards.
  • the TXBF buffer 1842 temporarily stores the TX pulse generator data before the start of line (SOL) trigger.
  • the TXTRG trigger moves the data from the TXBF Buffer 1842 into the TX Pulse generator 1813 in one TXCLK period.
  • the configuration for each transmit line is stored in the Line Memory 1844.
  • Each line configuration in the Line Memory 1844 contains the following information: Aperture Select Index, Mode Select, Aperture Enable Index, and Element Select Index.
  • the Aperture Select index is used to determine the Aperture to Channel mapping.
  • the Mode Select is used to access multiple delay profiles.
  • the Aperture Enable index controls the aperture size.
  • the Element Select index controls which element is active in the case that there are more array elements than transmit channels or receive channels.
  • The Aperture Select, Aperture Enable, and Element Select look-up tables (AP_SEL LUT 1846, AP_EN LUT 1848, ES LUT 1850) are a method to reduce the number of possible configurations and therefore the number of bits required to be stored in the line memory 1844.
  • the look-up tables are all re-programmable.
  • the Control and Status memory 1852 contains common parameters such as the number of transmit cycles (TX Cycles), the number of lines in the frame, and also configures the state machine in the Memory Control block 1854.
  • Memory Control 1854 is a state machine that decodes the Aperture Select, Aperture Enable and Element Select line information.
  • Referring to FIG. 20, the transmit waveform is actually two signals, referred to as the "A" and "B" signals, one of which is applied to the gate of pulser drive MOSFET QTDN 2006 and the other applied to the gate of pulser drive MOSFET QTDP 2008.
  • the "B” signal can be identical to the "A” signal except that it is delayed by Vi the period of the transmit clock.
  • the delay applied to each transmit waveform is divided into two components, the “coarse delay” and the "fine delay".
  • the coarse delay can be in units of 1/2 of the transmit frequency period.
  • the fine delay can be in units of 1/16 of the transmit frequency period, though other units of fine delay are contemplated within the scope of this invention.
  • the parameters defining the transmit waveform include the transmit center frequency, pulse width, number of cycles and the "dead time".
  • the "dead time” is the time interval following the first half cycle of the output pulse in which neither of the two output stage MOSFETs, QTDN 2006 and QTDP 2008, are turned on. Alteration of the transmit center frequency, pulse width and dead time may be used to alter the frequency content of the final transmit signal to the transducer element.
  • one transmit pulse generation circuit 2200 is used for each transmit beamformer channel.
  • a 16 bit A waveshape word 2202 is used to encode the fine delay, pulse width and dead time for the A signal.
  • a 16 bit B waveshape word 2203 is used to encode the fine delay, pulse width and dead time for the B signal.
  • the waveshape words 2202, 2203 can be stored in memory within, for example, a FPGA.
  • the frequency of the transmit output signal is determined by the frequency of the transmit clock.
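  • A minimal sketch, under an assumed bit layout (not necessarily the patented encoding), of building the 16-bit A and B waveshape words: each bit corresponds to 1/16 of a transmit period (the rate at which the bits are ultimately shifted out, as described further below), a run of ones of length pulse_width starting at fine_delay forms the gate drive pulse, the remaining zeros form the dead time, and the B word is the A word delayed by half a period (8 bit positions, which for a word re-sent every cycle amounts to a rotation):

```python
def waveshape_words(fine_delay, pulse_width):
    # fine_delay and pulse_width are in units of 1/16 of the transmit period.
    assert 0 <= fine_delay < 16 and 0 < pulse_width <= 16 - fine_delay
    bits_a = [1 if fine_delay <= i < fine_delay + pulse_width else 0
              for i in range(16)]             # bit i = time slot i
    bits_b = bits_a[8:] + bits_a[:8]          # half-period shift of a word that
                                              # is re-sent every transmit cycle
    to_word = lambda bits: sum(b << i for i, b in enumerate(bits))
    return to_word(bits_a), to_word(bits_b)

wa, wb = waveshape_words(fine_delay=2, pulse_width=6)
print(f"{wa:016b} {wb:016b}")
```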
  • the control inputs come from the transmit controller 1814, which can be implemented within the FPGA. These can be the pulse count 2204, the TXTRG 2206, and various clocks, as described below and shown in FIGS. 22-22C.
  • Transmit pulse generation begins when a TXTRG pulse 2206 is received from the channel control board 1814.
  • the TXTRG signal 2206 is sent to the transmit beamformer channels, and is the signal which the transmit beamformer delays are referenced to.
  • the TXTRG pulse 2206 begins the counting of 1/2 intervals of the transmit frequency clock cycle, denoted by TXCLKX2 2246.
  • the current hardware implementation uses a clock of 2 times the transmit clock.
  • the coarse delay 2210 is implemented by a Coarse Delay counter 2248 which is clocked by a clock, TXCLKX2 2246.
  • the signal TXTRG 2206 causes the count to begin.
  • a COARSE DONE signal 2208 is generated when the number of clock cycles of TXCLKX2 2246 has reached the coarse delay input variable value 2210.
  • the COARSE DONE signal 2208 enables the byte select circuit composed of multiplexers 2250 and 2252, the Pulse Inversion select circuit composed of multiplexers 2254 and 2256, and the 8:1 parallel-to-serial circuits 2212 and 2213.
  • the 16 bit waveshape words 2202 and 2203 are transferred into 16 bit registers 2216 and 2217.
  • the output of the A waveshape register 2216 is composed of the Partial Waveshapes: Partial_Waveshape_A(7:0) 2260 and Partial_Waveshape_A(15:8) 2261.
  • Partial_Waveshape_A(7:0) 2260 is transferred to either the 8:1 parallel-to-serial circuit 2212 or the 8:1 parallel-to-serial circuit 2213 through the Pulse Inversion Circuit composed of multiplexers 2254 and 2256.
  • Partial_Waveshape_A(15:8) 2261 is transferred to either the 8:1 parallel-to-serial circuit 2212 or the 8:1 parallel-to-serial circuit 2213 through the Pulse Inversion Circuit composed of multiplexers 2254 and 2256.
  • the Byte Select signal 2214 controls which of Partial_Waveshape_A(7:0) 2260 or Partial_Waveshape_A(15:8) 2261 is multiplexed through to the Pulse Inversion Circuit. In this way, the full 16 bits of Waveshape_A 2202 are transferred to the 8:1 parallel-to-serial circuits for serialization into a one bit data stream.
  • the 8:1 parallel-to-serial circuit 2212 and 2213 have double data rate (DDR) outputs.
  • COARSE DONE 2208 begins the count of the number of output pulses.
  • the Enable signal 2244 goes low, causing the registers 2216 and 2217 to stop outputting the Partial Waveshapes.
  • the 16-bit waveshape of the "A" phase 2202 is converted to 1 serial bit in two TXCLKX2 2246 cycles.
  • the 16-bit waveshape of the "B" phase 2203 is also converted to 1 serial bit in two TXCLKX2 2246 cycles.
  • Pulse inversion may be achieved by swapping the "A" and "B" phases before the signals are sent to the parallel-to-serial circuits. The signal swap occurs if the Pulse Inversion signal 2258 is enabled on the Pulse Inversion MUX circuit 2254 and 2256.
  • the 8:1 parallel-to-serial circuit with double data rate (DDR) output is clocked with TXCLKX8 2266 which is at a frequency of 8 times the transmit clock. With DDR output, the waveshape is shifted out at a rate of 16 times the transmit clock frequency.
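The sketch below illustrates, in Python, how a 16-bit waveshape word of the kind described above could encode a fine delay and pulse width as 1/16-period bit slots and then be serialized MSB-first into a one-bit stream. The bit ordering, polarity, and helper names are assumptions for illustration only, not the FPGA's actual encoding.

```python
def make_waveshape_word(fine_delay, pulse_width):
    """Build a 16-bit waveshape word as a bit pattern.

    Illustrative only: assumes each bit is one 1/16-period slot, that a
    '1' turns the corresponding pulser drive on, and that the MSB is
    shifted out first; the real FPGA encoding may differ.
    """
    bits = ["0"] * 16
    for i in range(fine_delay, min(fine_delay + pulse_width, 16)):
        bits[i] = "1"
    return int("".join(bits), 2)

def serialize(word):
    """Serialize a 16-bit word MSB-first, mimicking the one-bit stream
    produced by the 8:1 parallel-to-serial circuits with DDR output."""
    return [(word >> (15 - i)) & 1 for i in range(16)]

wave_a = make_waveshape_word(fine_delay=2, pulse_width=6)
print(f"{wave_a:016b}")    # 0011111100000000
print(serialize(wave_a))   # [0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```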
  • the signals from 8:1 parallel-to-serial circuit 2212 or 8:1 parallel-to-serial circuit 2213 are transferred out of the FPGA using the LVDS standard before being re-synchronized by clock TXCLKX16 2236.
  • the "A” phase signal is re-synchronized by a low jitter positive emitter coupled logic (PECL) flip-flop 2234 and a low jitter clock, TXCLKX16 2236, at 16 times the transmit frequency. This can eliminate jitter added by the circuit inside the FPGA.
  • the "B” phase signal is also re-synchronized by flip-flop 2235.
  • Both the "A” and “B” signals go to respective driver circuits 2238, 2240 to increase their current drive capability.
  • the outputs of the drivers become signals TXB 2004 and TXA 2002 and connect to the transmit multiplexers in the front end circuit 2000.
  • Re-sending of the waveshape data 2202 and 2203 continues until the pulse number counter 2242 has reached the number specified by the pulse count input variable 2204 and the enable signal 2244 changes state.
  • Waveshape_A 2202 may change from one transmit cycle to the next.
  • Waveshape_B 2203 may likewise change from one transmit cycle to the next. This allows for the generation of transmit waveforms with arbitrarily specified pulse widths from one cycle to the next.
  • Waveshape_A 2202 and Waveshape_B 2203 are specified independently. For example, either odd or even transmit waveforms may be generated.
  • FIGS. 22A-22C illustrate how the waveshape data can be used to change the fine delay, pulse width and dead time for the "A" and "B” signals.
  • the "B" output is identical to the "A" output except it is delayed by ½ of the transmit frequency period.
  • FIG. 22C illustrates that arbitrary waveforms can be generated in the "A" phase and the "B" phase.
  • Any Waveshape_A may be different from the one preceding it, and any Waveshape_B may be different from the one preceding it.
  • the 16 bit waveforms used for Waveshape_A1(15:0), Waveshape_A2(15:0) and Waveshape_A3(15:0) are different from one another.
  • the Waveshape_B(15:0) is repeated twice, but it would be possible to specify that a Waveshape_B be different from the preceding Waveshape_B.
  • the A and B waveforms are independent and can be used to implement transmit waveforms used for coded excitation methods, for example in applications involving contrast agent imaging and non-linear imaging.
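A minimal sketch of the pulse-inversion multiplexing described above: swapping the "A" and "B" waveshapes before serialization inverts the transmitted pulse, which is the basis for pulse-inversion (non-linear / contrast) imaging. The function and bit patterns below are illustrative assumptions.

```python
def pulser_drive(waveshape_a, waveshape_b, invert=False):
    """Route the A and B waveshapes to the two pulser drive gates.

    Sketch of pulse-inversion multiplexing: asserting invert swaps the
    phases so the transmitted pulse is inverted. Names and bit patterns
    are illustrative.
    """
    return (waveshape_b, waveshape_a) if invert else (waveshape_a, waveshape_b)

wave_a = 0b0011111100000000   # positive half cycle slots
wave_b = 0b0000000000111111   # same shape delayed by 1/2 period (8 slots)
txa, txb = pulser_drive(wave_a, wave_b, invert=True)
print(f"{txa:016b} {txb:016b}")
```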
  • the TXPower signal (shown as "TX High Voltage” in FIG. 18b) can control the amplitude of the output of the transmit pulser. As shown in this implementation, TXPower is common to all transmit channels. Optionally, the amplitude of the output pulse of each transmit channel can be controlled individually.
  • Fig. 19 is a system signal processing block diagram illustrating an exemplary beamformer control board 1900.
  • the beamformer control board 1900 is an exemplary embodiment of the beamformer control and signal processing block 1716.
  • the design and operation of the beamformer control board 1900 is generally known to one of ordinary skill in the art.
  • Embodiments of the exemplarily system can have the capability to acquire, process and display physiological signal sources 1901 of one or more of, for example, ECG, respiration, body temperature of the subject, or blood pressure.
  • the physiological signal acquisition block 1902 can contain signal acquisition modules that can acquire those types of physiological signals.
  • FIG. 20 is an exemplary schematic 2000 of the front end circuit transformer 1702, transmit output stage 1703 and the receive MUX 1704 and the transmit MUX 1708.
  • Other exemplary front end circuits can also be used with the described system.
  • front end circuits as described in U.S. Patent No. 6,083,164, entitled “Ultrasound Front-End Circuit Combining the Transmitter and Automatic Transmit/Receive Switch,” which is fully incorporated herein by reference and made a part hereof, can be used.
  • the transmit output stage receives a transmit waveform from the transmit pulse generator 1813 and in turn combines the transmit pulse information with transmit high voltage to create a high voltage waveform at an element which is part of the active transmit aperture.
  • transmit pulsing is effected by D1 2010, D2 2012, QTDP 2008, QTDN 2006, QTXMUXP 2014, QTXMUXN 2016 and T1 2018.
  • the transmit output stage which is included in the active transmit aperture is connected by turning on QTXMUXP 2014 and QTXMUXN 2016 to allow the gate drive signals, TXA 2002 and TXB 2004, to reach QTDN 2006 and QTDP 2008.
  • either QTDN 2006 or QTDP 2008 is turned on separately, with timing as required to produce the intended transmit waveform.
  • the pulser output appears on the left end of the transformer secondary, LTXS 2038, while the right end is clamped near 0 V by D1 2010 and D2 2012, which can be, for example, ordinary fast silicon switching diodes.
  • the receive multiplexing switch SWl 2020 can also be turned off to provide additional isolation.
  • the amplitude of the output of the transmit pulser is determined by the transmit supply voltage applied to the center tap of the primary of T1 2018 through R1 2022. Two voltage supplies are available, V1 2024 and V2 2026, where V1 2024 is larger than V2 2026. They are connected to a common node at R1 2022 as shown, through FET switches QLSH 2028, QLSL 2030 and diode D3 2032.
  • One or the other of the supply voltages is selected by turning on either QLSH 2028 or QLSL 2030 using control signals V1 NE 2034 and V2 NE 2036.
  • Diode D3 2032 helps prevent current from flowing from V1 2024 to V2 2026 when V1 2024 is connected to R1 2022. This configuration allows for rapid switching of the transmit supply voltage between two levels, since it avoids the requirement to charge or discharge the supply voltage as held on voltage storage capacitors C4 and C5.
  • SW1 2020 is a receive multiplexing switch which can be a single pole single throw (SPST) or a single pole double throw (SPDT) switch of a type such as a GaAs PHEMT (gallium arsenide pseudomorphic high electron mobility transistor). Alternatively, the receive multiplexing switch may be implemented with other types of field effect transistors or bipolar transistors. If SW1 2020 is a SPDT switch it is configured as shown in FIG. 20, where one terminal is connected to a terminating resistor and the other is connected to the receive channel input. If SW1 2020 is a SPST switch, the terminal connected to the terminating resistor and the terminating resistor itself are omitted.
  • the receive multiplexing switch is configured such that there is a connection between the array element and the receive channel.
  • the pulser drive MOSFETs, QTDN 2006 and QTDP 2008 are both turned on during receive, while QLSH 2028, QLSL 2030, QTXMUXN 2016 and QTXMUXP 2014 are held off.
  • For received signals too small to forward bias D1 2010 or D2 2012, these diodes present a high shunt impedance, dominated by their junction capacitance.
  • L1 2040 and the leakage inductance LTXS 2038 are used to level the receive mode input impedance, compensating for the junction capacitance of D1 2010, D2 2012 and the capacitance of the ganged switches forming the receive multiplexer.
  • signal RXCLMP is eliminated and its function is performed by TXA and TXB.
  • the transmit function on this circuit is identical to the circuit of FIG. 20, with QTXMUXN and QTXMUXP gating signals TxDriveN and TXDriveP.
  • QTXMUXN and QTXMUXP are off thus blocking signals TXA and TXB.
  • Resistors R8 and R9 shunt QTXMUXN and QTXMUXP so that when TXA and TXB are driven high for the duration of receive mode, the voltage on the gates of QTDN and QTDP increases slowly, resulting in gentle activation of these MOSFET switches.
  • the gentle activation of QTDN and QTDP for receive mode is controlled by signal RXCLMP in the circuit of FIG. 20.
  • resistors R5 and R6 pull the voltage on the gates of QTDN and QTDP to ground when transmit multiplexing switches are turned off after a transmit operation.
  • the pulser employs a center-tapped transformer and NMOS FETs, together with a switch-selectable level supply, to generate nominally square pulses.
  • In order to control the delivered spectrum when connected to the transducer element through a controlled impedance coax cable, the pulser employs series and shunt resistances. These serve to reduce the time-variation of source impedance during operation of the pulser and provide back termination of the transducer during the interval immediately following transmit pulses.
  • This circuit (which is on the far side of a multiplexer as described below) may be either a discrete switching MOSFET pulse amplifier or a collection of CMOS buffers sufficient to provide the required drive.
  • the transformer needed for the pulser is built as windings printed on the PCB augmented by small ferrite slabs fastened onto both sides of the PCB, around the windings. This technique is amenable to automated assembly provided the ferrite slabs can be packaged appropriately.
  • FIG. 23 is a block diagram showing an exemplary system according to an embodiment of the present invention.
  • the exemplary system 2300 is interfaced with a linear array 2302 having, for example, up to 256 elements.
  • a bundle of micro coax cables 2304 provides transmission of the signals between the array 2302 and the processing unit 2306.
  • the exemplary system further comprises a processing unit.
  • the processing unit 2306 is partitioned into two major subsystems.
  • the first is the front end 2308, which includes the beamformer, the front end electronics, the beamformer controller and the signal processing module.
  • the second is the computer unit 2310, or back end.
  • the front end subsystem 2308 is concerned with transmit signal generation, receive signal acquisition, and signal processing.
  • the back end 2310, which can be an off-the-shelf PC motherboard, is concerned with system control, signal and image processing, image display, data management, and the user interface. Data can be transferred between the front and back end sub-systems by, for example, a PCI express bus, as is known in the art to one of ordinary skill.
  • the module which processes the receive signals is the receive beamformer, as previously described herein.
  • the subsystem which generates the transmit signals is the transmit beamformer, also as previously described herein.
  • Each channel of the transmit and receive beamformers is connected to a separate element in the array 2302.
  • the beamformer is able to adjust the focal depth, aperture size and aperture window as a function of depth.
  • the exemplary system of FIG. 23 may support one or more various modes of ultrasound operation as are known in the art to one of ordinary skill. These modes are listed in Table 2, below:
  • Exemplary specifications of the system shown in FIG. 23 may include, for example, those specifications listed in Table 3, below.
  • the system or portions thereof may be housed in a portable configuration such as, for example, a cart, including beamformer electronics 2316, a computer unit 2310, and a power supply unit 2312.
  • the user interface includes an integrated keyboard 2318 with custom controls, trackball, monitor, speakers, and DVD drive.
  • the front panel 2320 of the cart has connectors 2322 for connecting an array-based transducer 2302 and mouse physiological information such as ECG, blood pressure, and temperature.
  • the rear peripheral panel 2314 of the cart allows the connection of various peripheral devices such as remote monitor, footswitch, and network 2324.
  • the cart has a system of cooling fans 2326, air guides, and air vents to control the heat of the various electronics.
  • the computer unit 2310 may be an off-the-shelf Intel architecture processor running an operating system such as, for example, Microsoft Windows XP.
  • the computer unit 2310 may be comprised of, for example, an Intel 3 GHz CPU (Xeon Dual Processor or P4 with Hyperthreading); 2 GB DDR memory; PCI Express x4 with cable connector; 100 Mbps Ethernet; USB 2.0; a graphics controller capable of 1024x768x32bpp @ 100 Hz; audio output (stereo); 2x 120 GB 7200 RPM hard disk drives (one for O/S + software, one for user data); and a 300W ATX power supply with power-factor correction.
  • the power supply unit 2312 may be comprised of the following: a universal AC line input (100, 120, 220-240 VAC, 50 or 60 Hz), where the AC input is provided by a detachable cable that connects to a system AC input terminal block and has AC distribution using IEC terminal blocks.
  • the system cart of FIG. 23, and other embodiments of the invention is further comprised of system cabling 2328.
  • System cabling 2328 includes a main AC line cord; AC cordage for line filter, circuit breaker, power supply unit; AC cordage inside the power supply unit 2312; a computer unit 2310 power supply cord; monitor power supply cord; DVD drive power supply cord, a fan tray 2326 power supply cord and other power cordages as used in the embodiments according to the invention.
  • System cabling 2328 further comprises instrument electronics cables, which include instrument electronics sub-rack power cable; PCI Express cable; transducer connector cable; mouse information system (MIS) cable; 3D stage cable; standby switch cable; etc.
  • MIS mouse information system
  • System cabling 2328 further comprises computer cables, which may include video extension cable(s) (VGA, DVI, SVideo, etc.); keyboard/mouse extension cable(s); keyboard splitter; mouse splitter; remote mouse cable; remote keypad cable; remote video cable; USB extension cable(s); Ethernet extension cable; printer extension cable; speaker extension cable, etc.
  • Filtered ambient air is provided through the use of fans 2326 to the system cart electronics which include, for example, the beamformer electronics (i.e., the beamformer card cage 2316), power supply unit 2312, and computer unit 2310.
  • the cooling system supports, for example, in one embodiment an ambient operating temperature range of +10 to +35°C, and the exhaust air temperature is kept below 20° C above the ambient air temperature, though other ambient operating ranges are contemplated within the scope of this invention.
  • the exemplary system is provided with a contiguous EMI shield in order to prevent external electromagnetic energy from interfering with the system operation, and to prevent electromagnetic energy generated by the system from emanating from the system.
  • the system shielding extends to the transducer cable 2304 and the array 2302, and the transducer connector 2322.
  • the computer 2310 and power supply units 2312 may be housed in separate shielded enclosures within the system. All shields are maintained at approximately ground potential, with very low impedance between them. There is a substantially direct connection between the chassis ground of the system and earth ground. Also, in one embodiment the AC supply may be isolated from the system power supply by an isolation transformer as part of the power supply unit 2312.
  • the exemplary system comprises a power supply unit 2402, an instrument electronics subrack, and a computer unit.
  • the power supply unit 2402 distributes both AC and DC power throughout the cart.
  • a DC voltage of, for example, 48V is supplied to the instrument electronics subrack though other voltages are contemplated within the scope of this invention.
  • the instrument electronics subrack houses a beamformer control board 2404, four identical channel boards 2406, and a backplane 2408.
  • the boards 2406 mate with the backplane 2408 via, for example, blind mate connectors.
  • the instrument electronics communicate with the computer unit via, for example, a PCI express connection 2410.
  • Exemplary channel boards are shown, and have been previously described, in reference to FIGS. 18a-18d.
  • The channel boards 2406 generate the transmit signals with the proper timing for transmit beamforming, and acquire, digitize and beamform the receive signals.
  • there are four channel boards 2406, each containing 16 transmit channels and 16 receive channels.
  • Each channel board 2406 also contains 64 front end circuits, including transmit output stages, power supply circuitry, an FPGA for the transmit beamformer, an FPGA to provide the partial sum of the receive beamformer, the beamformer bus and connections to the backplane.
  • each front end circuit is multiplexed to each transmit and receive channel.
  • the transmit channels and transmit output stages generate bipolar pulses at a specified frequency ranging from about 15 to about 55 MHz, with a specified cycle count and amplitude.
  • the transmit waveforms generated by each channel have a specific delay relative to the other channels with a resolution equal to approximately 1/16 of the period of the transmit frequency.
  • the delay profile across the active transmit aperture is controlled by the transmit beamformer controller.
  • a low jitter master clock is used to generate the transmit burst signals.
  • the transmit output stage includes a means of adjusting the peak to peak voltage on a per channel basis, in order to create an apodized transmit aperture.
  • the receive channels provide variable gain adjustment, filtering and digitization of the receive signals, and receive beamforming.
  • the gain is implemented with a variable gain amplifier which also acts as the preamplifier. Gain is varied throughout the acquisition of the ultrasound line according to a predetermined gain profile known as the TGC curve.
  • Anti-aliasing filters precede the ADC (analog-to-digital converter) to prevent aliasing and to limit the noise bandwidth.
  • dual ADCs 1807, 1808 are used for each channel, since the signal is acquired as a quadrature signal.
  • the ADC clocks are phased 90° relative to one another.
  • the sampling frequency is set according to the center frequency of the array being used.
  • the 10 bit output of the ADCs is sent to a dual port RAM.
  • the receive beamformer reads the quadrature samples and carries out interpolation filtering according to the dynamic receive focusing scheme which is controlled by the receive beamformer controller. After interpolation filtering, the outputs from each receive channel are summed and then sent to the CPU via the high speed data transfer bus.
  • the receive beamformer is setup via the RX Control Bus.
  • the transmit beamformer is setup via the TX Control Bus.
  • the control parameters are updated before the start of each ultrasound line.
  • the control parameters are TX aperture, TX delay profile (coarse and fine delay), RX aperture, RX delay profile (initial, coarse and fine delay), RX phase, and RX apodization.
  • SOL start-of-line
  • each output stage consists of two MOSFETs driving a center tapped transformer, with the supply voltage at the center tap controlling the pulse amplitude.
  • the output waveform is approximately a square pulse with a variable number of cycles.
  • One end of the secondary of the transformer leads to the array element, the other to the receive protection circuit.
  • Reactive impedance elements provide impedance matching and filtering.
  • a FET switch in series with the gate of each MOSFET provides the multiplexing.
  • the transformer and inductors are implemented, for example, as traces on the printed circuit board. There is a ferrite core for the transformer which is inserted into an opening in the board.
  • Each transmit channel is multiplexed to four output stages as can be seen in FIG. 18. There are two transmit signals per channel, one to drive each phase of the push-pull output stage.
  • the analog section of the transmit channels consists of a push-pull type driver circuit capable of driving the gate capacitance of the output stage MOSFETs with the appropriate rise and fall times. These are multiplexed to the output stages by analog switches.
  • the transmit beamformer uses DDR memory to produce transmit waveforms clocked at a maximum of approximately 800 MHz. Each channel uses a separate DDR memory output.
  • the output clock rate is about 16X the center frequency (fc), thereby providing capability for the appropriate delay resolution. Jitter is reduced by re-clocking the DDR output with PECL.
  • transmit waveshaping can be effected by adjusting the width of the positive or negative half cycles. This capability can introduce "dead time" between the positive and negative half cycles to improve the shape of the output pulse.
  • each front end circuit comprises a front end transformer 1702, a transmit output stage 1703, transmit MUX 1708, a receive MUX 1704, a diode limiter, and components for receive filtering.
  • each receive channel comprises the circuit elements which are involved with the acquisition of the receive signal.
  • the receive multiplexer 1704 connects the 64 receive channels to the elements within the active aperture, which is a subset of up to 64 contiguous elements within the 256 element array.
  • the receive beamformer such as the one shown in FIG. 17, is a module which independently processes and sums the digital data acquired by each channel in the receive aperture. Its functions may include, for example: dynamic control of the receive aperture size, i.e., the number of channels used during the acquisition of each receive sample; dynamic control of receive apodization, i.e., the window applied to the receive aperture; dynamic receive focusing, i.e., up sampling of the receive signal and the adjustment of the delay applied to each receive channel during the acquisition of each sample, through the use of interpolation filters and variation of aperture position within the array.
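As a rough illustration of the geometry behind dynamic receive focusing, the sketch below computes a per-element delay profile for a chosen focal depth. The element pitch, speed of sound, and function names are illustrative assumptions; the actual system uses interpolation filters and stored delay profiles as described above.

```python
import math

def receive_delay_profile(element_x_m, depth_m, c=1540.0):
    """Per-element receive focusing delays for a given focal depth.

    Geometric sketch only (not the patented delay tables): an element at
    lateral offset x needs its signal adjusted by the extra path length
    sqrt(z^2 + x^2) - z divided by the speed of sound c.
    """
    return [(math.sqrt(depth_m ** 2 + x ** 2) - depth_m) / c for x in element_x_m]

# 64-element aperture with an illustrative 38 um pitch, focused at 8 mm depth
pitch = 38e-6
xs = [(i - 31.5) * pitch for i in range(64)]
delays = receive_delay_profile(xs, 8e-3)
print(f"max delay across aperture: {max(delays) * 1e9:.1f} ns")
```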
  • there are four channel boards 2406, each containing 16 transmit and 16 receive channels, all plugging into a backplane.
  • Each channel board is assigned an address based on its position in the backplane to allow independent control of each board.
  • the beamformer control board 2404 of the exemplary system of FIG. 24 provides an uplink of data to the host CPU (back end) and centralized timing and control for the hardware electronics.
  • the link to host CPU is via a PCI express bus 2410, which allows a data bit rate of approximately 250MB/s in each direction per lane.
  • An x8 lane width PCI Express link provides a peak full-duplex bandwidth of approximately 4 GB/s.
  • the TX/RX controller 2412 provides master timing using start of frame and start of line synchronization signals to the transmit beamformer and receive beamformer. It sets up the beamformer parameters in memory via a custom local bus. All the low-jitter clock frequencies for beamforming are generated on the beamformer control board 2404.
  • the RF partial sum data from each channel board 2406 is summed 2414 together with synthetic aperture data 2416. Then the ray line data goes into a first-in-first-out (FIFO) memory 2418 where it sits temporarily before being copied to the RF Cine buffer 2420.
  • the RF Cine buffer 2420 stores full frames of RF data and is randomly accessible. Data is read from the RF Cine buffer 2420 and copied to the host CPU via the PCI Express link 2410. Alternatively, the data can be processed by the signal processor module 2422 before being sent to the main computer unit. The data is then buffered, processed further and displayed by the application software and application user interface that runs on the main computer unit.
  • FIG. 19 previously referenced herein, is a block diagram of an embodiment of a beamformer control board 1900.
  • TX/RX Controller. Transmit beamformer control:
  • the transmit (TX) beamformer control updates the transmit beamformer parameters each transmit line.
  • the parameters include number of coarse delay cycles at the transmit center frequency (fc), number of fine delay cycles (at 16 x fc), transmit waveshape (at 16 x fc), number of transmit cycles, transmit select, and transmit voltage.
  • the transmit beamformer control also schedules the updating of parameters for duplex mode, triplex mode, or multiple focal zones.
  • Receive beamformer control controls the receive delay profile, aperture size and apodization for each channel.
  • the delay control consists of coarse and fine delays, which are controlled by the dual port RAM read pointer and the interpolation filter coefficient selector bit, respectively.
  • the aperture control signal controls the aperture size dynamically by specifying when the output of each channel becomes active. This is done by controlling the clear signal of the final output register of the interpolation filters. Dynamic receive apodization is controlled by five bits of apodization data with which the signal in each channel is multiplied. The receive control signals are read out from a control RAM at the input sample clock rate as shown in FIG. 26.
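The following sketch shows one way 5-bit apodization codes could be derived from an aperture window, consistent with the statement above that each channel's signal is multiplied by five bits of apodization data. The Hamming window and rounding are illustrative choices, not the disclosed coefficients.

```python
import math

def apodization_codes(n_channels, bits=5):
    """Quantize an aperture window to n-bit apodization codes.

    Sketch only: the description above states that five bits of apodization
    data multiply each channel; the Hamming window and rounding here are
    illustrative choices, not the disclosed coefficients.
    """
    full_scale = (1 << bits) - 1   # 31 for 5 bits
    window = [0.54 - 0.46 * math.cos(2 * math.pi * i / (n_channels - 1))
              for i in range(n_channels)]
    return [round(w * full_scale) for w in window]

print(apodization_codes(16))   # tapered codes, ~2 at the edges up to 31 at the center
```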
  • Transmit/Receive Synchronization: a block diagram of transmit/receive synchronization is shown in FIG. 27.
  • for B-Mode and M-Mode imaging, different transmit and receive frequencies can be used.
  • line-to-line timing differences (jitter) between the transmit cycle and receive cycle may be introduced because the clocks are asynchronous.
  • a method to synchronize the transmit and receive clocks is to use a programmable divider (TX_Divider) 2714 to generate the receive clock (RXCLK_B) from the transmit clock (TXCLKx16) as shown in the embodiment of FIG. 27.
  • the receive frequency is a fixed ratio of the transmit frequency.
  • the ratio is transmit clock frequency times 16 divided by N, where N is an integer.
  • the TX_Divider 2714 is set to divide by 18. Due to the nature of the divider, RXCLK_B is in good phase alignment with TXCLKx16, and the two clocks always have a minimum phase difference.
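A small sketch of the divider relationship described above, RXCLK_B = (16 × fc) / N. The rounding to the nearest integer N and the example frequencies are assumptions used only for illustration.

```python
def rx_divider(tx_fc_hz, desired_rx_hz):
    """Choose N so the receive clock is RXCLK_B = (16 * fc) / N, per the
    relationship described above. Rounding to the nearest integer N is an
    assumption; the example values are illustrative."""
    n = round(16 * tx_fc_hz / desired_rx_hz)
    return n, 16 * tx_fc_hz / n

# e.g. a 30 MHz transmit clock with a receive clock near 26.7 MHz
print(rx_divider(30e6, 26.7e6))   # -> (18, ~26.67e6), matching the divide-by-18 value above
```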
  • RXCLK_B is used to synchronize the start of line trigger (SOL) 2702.
  • the synchronized start of line trigger (SOL_S) 2704 generates TX_TRG.
  • TX_TRG is synchronized to TXCLKx16 by TX_TRG SYNC 2716. A delay between SOL_S and TX_TRG can be added if necessary.
  • TX_TRG signals the transmit beamformer to begin a transmit cycle.
  • RXGATE is synchronized to RXCLK_B and signals the receive beamformer to begin data acquisition.
  • a multiplier (RX PLL) 2718 provides the RXCLKx4 clock frequency that is needed by the I/Q Clock Generator 2720 to generate the I and Q clocks.
  • FIG. 27A illustrates an alternative method of maintaining consistent synchronization between the transmit cycle and receive cycle by delaying the start of line trigger (SOL) 2702 to a point when the phase difference between the transmit clock and the receive clock is at a known state.
  • the SOL trigger 2702 is synchronized by the TX_RX_SYNC pulse.
  • the TX_RX_SYNC pulse is generated by the TX_Sync Timer 2722.
  • the synchronized start of line trigger (SOL_S) 2704 can now start the control timing signals for the transmit beamformer 2706 and receive beamformer data acquisition.
  • TX_TRG is a delayed version of SOL_S that signals the transmit beamformer to begin a transmit cycle.
  • TX_TRG is synchronized to TXCLK.
  • the transmit beamformer 2706 generates the TXGATE multiplexer control signals and TXA/TXB transmit pulses to the front end module.
  • RXGATE signals the receive beamformer to start data acquisition.
  • RXGATE is synchronized to RXCLK 2710.
  • the phase difference between transmit 2708 and receive clocks 2710 is fixed as long as the TX_Sync_Period 2712 is calculated correctly.
  • the TX_Sync_Period 2712 is the minimum number of transmit clock cycles required to achieve synchronization. For example, if the transmit clock frequency is 30 MHz and the receive clock frequency is 25 MHz, TX_Sync_Period 2712 will be 6 cycles of the transmit clock.
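The TX_Sync_Period calculation above can be illustrated with a short sketch: the two clocks return to the same phase relationship after tx/gcd(tx, rx) transmit cycles, which reproduces the 6-cycle example for 30 MHz and 25 MHz. The assumption that both frequencies are exact integer values in Hz is an illustrative simplification.

```python
from math import gcd

def tx_sync_period(tx_hz, rx_hz):
    """Minimum number of transmit clock cycles after which the transmit and
    receive clocks return to the same phase relationship.

    Sketch assuming both frequencies are exact integers in Hz: the clocks
    realign every tx_hz / gcd(tx_hz, rx_hz) transmit cycles.
    """
    return tx_hz // gcd(tx_hz, rx_hz)

print(tx_sync_period(30_000_000, 25_000_000))   # -> 6, as in the example above
```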
  • the clock generator 2428 provides the appropriate clock frequencies for transmit and receive beamforming. It comprises a low-jitter master clock, a programmable divider, clock buffers and re-synchronization circuits.
  • the frequencies are: transmit frequency (fc) - 25 to 50 MHz
  • the fastest clock used in this exemplary embodiment can be 800 MHz (50 MHz x 16).
  • the PCI Express bridge 2426 connects the host CPU and the embedded CPU 2424 via a PCI bus 2410. This allows DMA transfers from the RF cine buffer 2420 to the host processor memory and vice versa.
  • PCI Express builds on the communication models of PCI and PCI-X buses. PCI Express uses the same memory mapped address space as PCI and PCI-X with single or burst read/write commands. However, PCI Express is a point-to-point serial interconnect using a switch to connect different devices, whereas PCI and PCI-X are parallel multi-drop buses. PCI Express can be used as a chip-to-chip or board-to-board communication link via card edge connectors or over a cable.
  • the bandwidth of the PCI Express link may be, for example: Uplink - 210 MB/s burst and 140 MB/s sustained rate for RF Data, MIS Data, and diagnostics; Downlink - 20 MB/s burst and ~1 MB/s sustained rate for writing control parameters.
  • the partially summed beamformer RF data from the channel boards 2416 is first processed in the synthetic aperture FPGA.
  • the processing comprises beamformer final summation, synthetic aperture and write to FIFO.
  • the RF cine buffer 2420 is, for example, a 1 GByte dual port RAM.
  • the RF cine buffer 2420 is a random access memory block that stores RF data organized in lines and frames. The data can be input and output at different rates to support asynchronous signal processing.
  • the data stream is made up of interleaved I and Q beamformed data.
  • the FIFO buffer provides storage of the beamformer data while the memory is being read by the CPU for the next display period.
  • buffer specifications may include, for example: Storage - 300 full size frames (512 ray lines x 1024 samples/line x 32 bits I&Q data); Buffer size - >629 Mbytes; Input rate - 140 Mbytes/sec; Output rate - 140 Mbytes/sec (RF data rate), 32 Mbytes/sec (video rate).
  • the described exemplary ultrasound system is capable of very high acquisition frame rates in some modes of operation, in the range of several hundred frames per second.
  • the display rates do not have to be equivalent to the acquisition rates.
  • the human eye has a limited response time, and acts as a low pass filter for rapid changes in motion. It has been demonstrated that frame rates above 30 fps have little benefit in adding to perceived motion information. For this reason, displayed ultrasound image information can be processed at a rate of 30 fps or lower, even when the acquisition rates are much higher.
  • a large RF buffer memory is used to store beamformer output data. An exemplary structure for buffering the beamformer output data is shown in FIG. 28.
  • the memory buffer 2800 can hold many frames of RF data. For a depth of 512 wavelengths, the storage of a full line of 16 bit quadrature sampled RF uses 4K bytes (1024 I,Q samples * 32 bits/pair). With 512 ray lines per frame, a 1 GByte memory buffer can then hold 512 2D frames. To keep track of frames written to the buffer, the write controller maintains "first frame" and "last frame" pointers, which can be read by the signal processing task, and point respectively to the first frame in the buffer available for reading, and the last frame available for reading.
  • the beamformer summation output is written by the Write Controller 2802 to the next available frame storage area, which is typically the storage area immediately following that pointed to by the "last frame” pointer.
  • the "first frame” and “last frame” pointers are updated so that the data is written to the correct address in the buffer.
  • When acquisition is stopped (freeze), the buffer then contains the last N frames, with the "first frame" pointer indicating the oldest frame in the buffer.
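The following toy ring buffer mirrors the "first frame"/"last frame" pointer bookkeeping described above, using the frame sizing from the example in the text (2 MB per frame, 512 frames in 1 GByte). It is an illustrative sketch only, not the hardware write controller.

```python
class RFCineBuffer:
    """Toy ring buffer mirroring the "first frame"/"last frame" pointers.

    Frame sizing follows the example in the text (512 lines x 1024 I,Q
    pairs x 32 bits = 2 MB per frame, so a 1 GByte buffer holds 512
    frames). Sketch of the bookkeeping only.
    """
    BYTES_PER_FRAME = 512 * 1024 * 4            # 2 MB per 2-D frame
    N_FRAMES = (1 << 30) // BYTES_PER_FRAME     # 512 frames in 1 GByte

    def __init__(self):
        self.first_frame = 0    # oldest frame available for reading
        self.last_frame = -1    # newest frame available for reading
        self.count = 0

    def write_frame(self):
        self.last_frame = (self.last_frame + 1) % self.N_FRAMES
        if self.count < self.N_FRAMES:
            self.count += 1
        else:                   # buffer full: the oldest frame is overwritten
            self.first_frame = (self.first_frame + 1) % self.N_FRAMES

buf = RFCineBuffer()
for _ in range(600):
    buf.write_frame()
print(buf.first_frame, buf.last_frame, buf.N_FRAMES)   # 88 87 512
```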
  • the signal processing module 2422 has access to the RF memory buffer 2420. It accesses one acquisition frame at a time, at the display frame rate, to produce the displayed estimate data. While the system is scanning, a timer signals the signal processing module that a display frame is required. At that time, the signal processing module 2422 will check to see if a new acquisition RF frame is available, and if so, will read the data and begin processing it. If the acquisition rate is faster than the display rate, the acquisition frames will be decimated prior to processing and display. After the system has been put in freeze, the RF frames stored in the memory buffer can be processed at any desired rate, up to the original acquisition rate.
  • the beamformer control board 2404 comprises a signal processor 2422 in the datapath to reduce the data load and/or computation load on the host CPU.
  • the processor 2422 may be, for example, a FPGA with a sufficient number of multipliers and memory, or a CPU such as, for example, a 970 PPC or a general purpose DSP.
  • the signal processing functions performed are divided between the signal processing module 2422 on the beamformer control board 2404, and the main computer unit (i.e., host computer). They include post-beamforming gain control, B- Mode amplitude detection and log compression, PW Doppler spectral estimation, color flow clutter filter and frequency/power estimation, asynchronous signal processing or frame averaging. Factors that may be considered in deciding where the processing takes place include the processing speed required, the complexity of the process, and the data transfer rates required.
  • the signal processing module 2422 performs processes which may include line interpolation, detection and compression.
  • Doppler color flow imaging is combined with B-Mode imaging such that the common blocks of the B-Mode signal path and the Doppler color flow signal path are time multiplexed to provide both types of processing.
  • the B-Mode lines are acquired in between the CFI ensembles at a rate of 1 or 2 lines for each ensemble, depending on the relative ray line densities of B-Mode and CFI (typical CFI images use half the ray line density of B-Mode), as is known to one of ordinary skill in the art.
  • the Signal Processing Module 2422 performs processes that may include: ensemble buffering; clutter filter; velocity estimate calculation; power estimate calculation and variance estimate calculation.
  • the various parameters of the Doppler signal are estimated by a Doppler frequency and power estimator in either the host computer or the CPU 2424 on the beamformer control board.
  • the parameters estimated for each sample depth in the ensemble may include Doppler frequency, Doppler power, and the variance of the frequency estimates. These parameters may be used in a decision matrix to determine the probability that the frequency estimate is a true estimate of the Doppler spectrum, rather than a noise or clutter signal estimate.
  • Color flow velocity estimates are derived from the Doppler frequency estimates. All of the estimates are derived using a 2-D autocorrelation method as is known to one of ordinary skill in the art.
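As an illustration of the autocorrelation approach referenced above, the sketch below implements the simpler 1-D (lag-one) Kasai estimator for a single sample depth rather than the full 2-D method; the synthetic ensemble and parameter values are assumptions made only for this example.

```python
import cmath

def kasai_velocity(iq_ensemble, prf_hz, fc_hz, c=1540.0):
    """Lag-one autocorrelation (Kasai) velocity estimate at one sample depth.

    Sketch of the classic 1-D autocorrelation estimator rather than the
    full 2-D method referenced above. iq_ensemble holds complex I + jQ
    samples taken at the same depth on successive pulses of the ensemble.
    """
    r1 = sum(b * a.conjugate() for a, b in zip(iq_ensemble, iq_ensemble[1:]))
    mean_doppler_hz = cmath.phase(r1) * prf_hz / (2 * cmath.pi)
    return mean_doppler_hz * c / (2 * fc_hz)   # axial velocity in m/s

# Synthetic ensemble: 8 pulses, constant 2 kHz Doppler shift, 10 kHz PRF, 30 MHz fc
ens = [cmath.exp(2j * cmath.pi * 2000 * n / 10000) for n in range(8)]
print(f"{kasai_velocity(ens, 10e3, 30e6) * 1000:.2f} mm/s")   # ~51.33 mm/s
```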
  • Pulsed Doppler acquisition may be either by itself, in duplex mode, or in triplex mode.
  • in duplex mode, the PW Doppler transmit pulses are interleaved with the B-mode transmit pulses so that the B-mode image is updated in real time while the PW Doppler signal is acquired.
  • the method of interleaving depends on the Doppler PRF selected.
  • the components shared between B-Mode imaging and Pulsed Doppler processing are time multiplexed to accomplish both types of processing.
  • in triplex mode, Pulsed Doppler is combined with B-Mode and color flow imaging.
  • the simplest implementation of triplex mode is a time interleaving of either a B-Mode line or a CFI line, in a fixed sequence that eventually results in a full frame of B-Mode and CFI image lines.
  • the PRFs for both Pulsed Doppler and CFI are reduced by half, compared with their normal single modes of operation.
  • the I and Q samples for each ray line are range gated (a selected range of I or Q signals are separated out from the full range available and averaged to produce a single I,Q pair), to select the region of interest for the Doppler sample volume.
  • the length of the range gate can be varied, if desired, by the user to cover a range of depth.
  • the resulting averaged I,Q pairs are sent to a spectral processor, as well as an audio processor, which converts the I, Q Doppler frequency data to two audio output streams, one for flow towards the transducer (forward) and the other for flow away from the transducer (reverse).
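A minimal sketch of the range gating step described above: the I and Q samples inside the selected gate are averaged to a single I,Q pair per ray line. Sample indexing and normalization are illustrative.

```python
def range_gate(i_samples, q_samples, gate_start, gate_len):
    """Average the I and Q samples inside the selected range gate to produce
    a single I,Q pair per ray line, as described above.

    Sketch only; sample indexing and normalization are illustrative.
    """
    i_gate = i_samples[gate_start:gate_start + gate_len]
    q_gate = q_samples[gate_start:gate_start + gate_len]
    return sum(i_gate) / len(i_gate), sum(q_gate) / len(q_gate)

i_line = [0.1, 0.4, 0.9, 1.2, 0.8, 0.2]
q_line = [0.0, -0.2, -0.5, -0.4, -0.1, 0.1]
print(range_gate(i_line, q_line, gate_start=2, gate_len=3))   # approx (0.967, -0.333)
```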
  • the signal processing module 2422 performs processes including range gating (digital integration).
  • the signal processing module 2422 performs processes including detection and compression.
  • EKV is an acquisition method in which extremely high frame rate images are generated (1000 frames per second and higher) as a post processing operation using ECG (electrocardiograph) signals as timing events.
  • EKV imaging may be implemented with either a single element mechanically scanned transducer, or with a transducer array.
  • EKV imaging involves the acquisition of ultrasound lines at a PRF of 1000 Hz or higher at each line position in the 2-D image over a time period.
  • the time period over which ultrasound lines are acquired at each line position, referred to as the EKV Time Period, can be, for example, 1 second, which is long enough to capture several cardiac cycles in a mouse or other small animal.
  • the acquisition of each ultrasound line involves the firing of a single transmit pulse followed by acquisition of the returning ultrasound data.
  • each frame of the EKV image is reconstructed by assembling the ultrasound lines which were acquired at the same time during the cardiac cycle.
  • the sequence of acquisition of the EKV data set may be such that the ultrasound line position remains static while the ultrasound lines are acquired over the time period. For example, if the time period is 1 second, and the PRF is 1 kHz, 1000 ultrasound lines will be acquired at the first ultrasound line position. The line position can then be incremented, and the process repeated. In this way all EKV data for all 250 lines in the 2-D image will be acquired.
  • the method of interleaving allows for a reduction in the length of time required to complete the EKV data set.
  • if the PRF is 1 kHz, there is a 1 ms time period between pulses during which other lines can be acquired.
  • the number of ultrasound lines which can be acquired is determined by the two-way transit time of ultrasound to the maximum depth in tissue from which signals are detected. For example, if the two-way transit time is 20 µsec, 50 ultrasound lines at different line positions may be interleaved during the PRF interval. If we label the line positions L1, L2 ... L50, one exemplary interleaving method can be implemented as follows:
  • the sequence in the above table is repeated until the EKV Time Period has elapsed, at which time there will be a block of data consisting of 1000 ultrasound lines acquired at 50 different line positions, from line 1 to line 50.
  • the acquisition of the block of data is then repeated in this manner for the next 50 lines in the 2-D image, line 51 to line 100, followed by acquisition over lines 101 to 150, etc., until the full 250 line data set is complete.
  • the total time required for the complete data set over 250 lines is reduced by a factor equal to the number of lines interleaved, which in this example is 50. Therefore the total length of time required would be 5 seconds.
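The sketch below generates one plausible interleaved EKV acquisition schedule consistent with the example above (50 line positions interleaved within each 1 ms PRF interval, over a 1 second EKV time period). The exact ordering in the patent's table may differ; the function name and return format are illustrative assumptions.

```python
def ekv_acquisition_order(n_interleaved=50, prf_hz=1000, period_s=1.0):
    """Generate one block of an interleaved EKV acquisition schedule.

    Sketch of a plausible schedule consistent with the description above;
    each entry is (pulse_repetition_index, line_position).
    """
    pulses_per_position = int(prf_hz * period_s)     # 1000 pulses per line position
    order = []
    for rep in range(pulses_per_position):           # one PRF interval per iteration
        for line in range(1, n_interleaved + 1):     # L1, L2, ..., L50
            order.append((rep, line))
    return order

schedule = ekv_acquisition_order()
print(len(schedule), schedule[:3], schedule[-1])   # 50000 [(0, 1), (0, 2), (0, 3)] (999, 50)
```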
  • the embedded CPU 2424 on the beamformer control board 2404 is, in one embodiment, a 32-bit embedded microprocessor with a PCI interface 2426 and a DDR memory interface.
  • the main function of the embedded CPU 2424 is data traffic control. It controls data flow from the receive beamformer FIFO 2418 to the RF cine buffer 2420, from the RF cine buffer 2420 to the signal processing module 2422, and from the signal processing module 2422 to the host PC.
  • the beamformer control and diagnostics information is memory mapped on the target PCI device as registers.
  • the embedded CPU 2424 decodes the location of the registers and relays the information over the appropriate local bus.
  • the local bus can be, for example, PCI, custom parallel (using GPIO), I2C serial, or UART serial, as each are known in the art.
  • the physiological acquisition system 2430 filters and converts analog signals from the mouse information system inputs 2438. These signals may include subject ECG, temperature, respiration, and blood pressure. After data conversion, the data is transferred to the embedded CPU 2424 memory via local bus, and then on to the host CPU for display via the PCI Express link 2410.
  • the beamformer control board 2404 monitors the rack power supply 2432 and lower voltages generated on each board.
  • the rack power supply 2432 may provide +48VDC to the backplane 2408.
  • two high voltage post regulators 2436 on each channel board 2406 supply the transmit portion of the front end circuit. The beamformer control board 2404 monitors these regulators for over-current or over-voltage situations.
  • the backplane 2408 mounts to the instrument electronics card cage. In one embodiment it has blind mate edge connectors to allow each of the boards to plug in, though other connection schemes are contemplated within the scope of this invention. It provides interconnection between boards, and input/output connectors for signals outside the card cage. In one embodiment, the size of the backplane is 8U high by 84HP wide so that it may fit in a 8Ux 19" rackmount VME-style card cage. The card cage depth may be 280mm in one embodiment.
  • system software 2330 operates on a processor platform such as, for example, an Intel processor platform running Windows XP operating system.
  • the processor platform of one embodiment of the system is provided by the computer unit 2310, previously described herein.
  • the system software 2330 may be loaded on a standalone workstation for reviewing studies.
  • the workstation does not contain beamformer hardware nor does it have a transducer for acquisition of new data. It can review previously acquired study data and perform a limited set of processing functions. For example, the user may add measurements, playback at different frame rates, or change the color map.
  • FIG. 30 is an embodiment of a main software application that may be used to practice one or more aspects of the present invention.
  • the system software 3000 as shown in FIG. 30, may be loaded when the system powers-up and can provide an interface for an operator of the system.
  • the framework 3018 which determines the overall structure of the components can be used to produce an application executable by the operating system of the processing platform of the computer unit 2310 and to interface with the operating system.
  • the framework 3018 may produce a Windows application and interface with the Windows operating system.
  • the application controller 3020 software component can be the state machine for the system software 3000. It may control the interaction between the operator, the system software 3000 and the front end 2308.
  • the application view 3022 software component can provide a foundation to support the presentation of the system software 3000 based on the state machine in the application controller 3020 software component as previously described herein.
  • the studies component 3002 may allow the operator to perform studies, review study data, edit content, and import or export study data.
  • there can be various operating modes supported by the system for acquiring data, which can be managed by a modes 3004 software component of the system software 3000.
  • the supported modes may include, for example, B-Mode, 3D Mode, M-Mode, PW-Doppler, Color Flow Doppler, etc.
  • Each mode has adjustable parameters, cine loop acquisition, and main image display area, which may be managed by the modes 3004 software component.
  • Some modes may operate simultaneously, e.g., PW-Doppler and B-Mode.
  • the beamformer control 3024 software component can generate the imaging parameters for the front end based on the settings in the system software 3000.
  • the user data manager 3006 software component may maintain user preferences regarding how the system is configured.
  • the measurements 3026 software component may allow the operator to make measurements and annotations on the mode data.
  • the calculations 3028 software component may allow the operator to perform calculations on measurements.
  • the utilities layer 3008 software component contains common utilities that are used throughout the application as well as third party libraries.
  • the hardware layer 3012 software component is used to communicate to the beamformer hardware via the PCI Express bus, as previously described herein.
  • the physiological 3030 software component can be used to control the physiological data collection through the hardware layer 3012 as previously described herein.
  • the data layer 3010 may contain a database of all the different sets of parameters required for operation. The parameters may be set depending on the current user configuration and mode of operation.
  • the message log 3014 and engineering configuration 3016 may be used for diagnostic reporting and troubleshooting.
  • the system can have one transducer connector 2438 on the front of the cart and the user can physically unplug the first transducer and then plug in another when switching transducers.
  • this may be a 360-pin transducer connector 2438.
  • a transducer select board with two transducer connectors at the front panel can also be used and enables switching between transducers without physically handling the transducers.
  • Another exemplary embodiment of the high frequency ultrasound imaging system comprises a modular, software-based architecture described below and as shown in FIG. 31.
  • the embodiment of FIG. 31 comprises four modules, which are part of a processing unit, for the exemplary system: a beamformer module 3102; an RF buffer memory 3104; a signal processing module 3106; and a system CPU 3108.
  • the beamformer module 3102 comprises the circuitry for transmitting and receiving pulses from the transducer, as well as the delay processing used for beamforming. Its output can be summed RF data or optionally down-converted I and Q data from quadrature sampling techniques. The output of the beamformer module 3102 may be written to a large RF buffer memory 3104, as described herein.
  • the CPU/signal processing module 3106 is responsible for processing the RF data from the beamformer 3102 for image formation, or Doppler sensing.
  • the signal processing module 3106 can comprise a CPU module with the processing tasks implemented in software executing in a general purpose computing environment. Alternatively, the signal processing module 3106 can be implemented with some signal processing functions in hardware or in software executing on dedicated processors, in which case an additional signal processing module can be implemented as a plug-in card to the system CPU 3108.
  • the signal processing module 3106 can be implemented with high performance CPUs or with digital signal processing chips (DSPs).
  • One type of DSP which may be used is of the floating point variety, as are known in the art, and can be controlled by the host CPU, as well as being "data driven.”
  • the system CPU 3108 can act as both a user interface/control system as well as a signal/image processing sub-system. System control information can be distributed using memory mapped I/O, wherein modules interface to the peripheral bus of the CPU module.
  • the system CPU 3108 can be physically separate from the beamformer module 3102 and can be connected via a PCI Express cable (or equivalent) 3110.
  • An exemplary PCI Express cable 3110 is one that supports transfers up to 1 GB/sec and lengths of three meters.
  • the system CPU 3108 in the exemplary architecture can perform a number of real-time processing tasks, including signal processing, scan conversion, and display processing. These tasks can be managed in a manner that does not require a "hard" real-time operating system, allowing for expected system latencies in the scheduling of processing routines.
  • the system CPU 3108 can be responsible for managing the user interface to the system, and providing setup and control functions to the other modules in response to user actions.
  • the CPU motherboard and operating system can support multiple CPU's, with fast access to a high speed system bus, and near real-time task management.
  • the beamformer module 3102 of this exemplary system comprises a transmit beamformer.
  • the transmit beamformer can provide functions which may include, for example, aperture control through selection of a subset of array elements, delay timing to start of transmit pulse, transmit waveform generation, and transmit apodization control.
  • a transducer array 3112 is utilized.
  • this transducer array 3112 contains up to 256 elements.
  • the transmit beamformer component of the beamformer module 3102 may be comprised of a number of transmitters equivalent to the number of transducer array elements.
  • the transmit beamformer comprises 256 transmitters.
  • the transmit beamformer can comprise fewer than 256 transmitters and a high voltage switching method to connect an individual transmitter to a specific element. High voltage multiplexers are used to select a linear subset of elements from a 256 element array.
  • the transmit beamformer component of the beamformer module 3102 may comprise high voltage pulser drivers for all 256 elements of the exemplary array, and a switching mechanism which connects a subset of transmit waveform generators to the appropriate drivers/array elements.
  • This optional embodiment uses 256 TX/RX switches for protection of the receiver inputs with low level multiplexing to select the subset of array elements for the receive aperture.
  • the low level multiplexing can optionally be combined with the TX/RX switches and in some cases has less attenuation of the receive signals and faster switching, when compared with a high voltage MUX scheme.
  • Transmit delays of 1/16 wavelength can be used and provide appropriate focusing and side lobe reduction in the transmit beam.
  • the maximum delay times, when measured in wavelengths, can be at least 0.7 times the largest transmit aperture.
  • the largest transmit aperture is 192 wavelengths.
  • the maximum transmit delay times can be at least 6.72 microseconds.
  • the highest center frequency of interest specifies the delay resolution. At 50 MHz, this gives a delay accuracy of 1.25 nsec, which uses the equivalent of an 800 MHz clock and a 13 bit counter to achieve the maximum delay time of 6.72 usec.
  • a four phase clock at 200 MHz can be used instead of a high frequency clock. This would allow selecting a specific transmit delay by selecting one of the four phases of the 200 MHz clock as input to an 11 bit counter, which is preloaded with the number of 200 MHz clocks in the delay time.
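The four-phase alternative above can be illustrated as follows: a delay is decomposed into a 200 MHz counter preload plus a phase selection, giving the same 1.25 ns resolution as an 800 MHz clock. The rounding and return format in this sketch are illustrative assumptions.

```python
def four_phase_delay(delay_s, clk_hz=200e6, phases=4):
    """Decompose a transmit delay into a 200 MHz counter preload plus a
    clock-phase selection, as outlined above.

    Sketch: the effective resolution is 1 / (phases * clk_hz) = 1.25 ns;
    rounding and the return format are illustrative assumptions.
    """
    step = 1.0 / (clk_hz * phases)        # 1.25 ns per tick
    ticks = round(delay_s / step)
    counter_preload, phase = divmod(ticks, phases)
    return counter_preload, phase

# The 6.72 us maximum delay fits comfortably in an 11-bit counter of 200 MHz cycles
preload, phase = four_phase_delay(6.72e-6)
print(preload, phase)   # 1344 0  (1344 < 2**11 = 2048)
```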
  • the transmit beamformer component of the beamformer module 3102 further comprises a bi-polar transmit pulser.
  • This type of pulser drive is typically specified with three parameters:
  • T1, which is a transmit frequency (duration of half cycle); T2, which is a half cycle on time
  • the control of the half cycle pulse duration, T2, allows for closer approximation to a sine wave drive, with improved transducer output. It can also be used to obtain a somewhat crude apodization of the transmit pulse power, provided that sufficiently fine control of the duration is provided.
  • Transmit apodization can be used to reduce spurious lobes in the transmit beam, which can be either side lobes or grating lobes. Apodization of the transmit aperture results in reduced power output and worse lateral resolution, so it is not always desirable. Often a small amount of apodization capability, such as providing for only a few levels of power output, is sufficient to achieve a good compromise between spurious lobe reduction and lateral resolution.
  • the pulse width modulation scheme mentioned above for transmit waveform generation is one possible means of providing limited transmit apodization.
  • a second method is to provide not one, but possibly four or more levels of high voltage for the pulser drivers, with a means to select one of these levels on each pulser.
  • the beamformer module 3102 also comprises a receive beamformer component.
  • receive beamforming implementations There are several different receive beamforming implementations which can be used in the exemplary system.
  • the digital methods discussed below have at least one A/D converter for each element in the receive aperture.
  • the A/D converter bit depth is 10 bits, which gives the desired beamforming accuracy at -50 dB signal levels.
  • the A/D dynamic range is chosen to reduce spurious lobes and thus provide contrast resolution as desired.
  • Eight bit A/D converters can be used if appropriate.
  • Embodiments of the exemplary system use 64 receive channels, combined using synthetic aperture to implement a 128 channel receive aperture for applications where maximum frame rate is not needed.
  • One optional method for digital receive beamforming implementation samples the ultrasound signals from the individual elements at a rate which is at least twice as high as the highest frequency in the signal (often called the Nyquist rate). For example, for a 50 MHz, 100% bandwidth transducer the highest signal frequency is 75 MHz, so the Nyquist sampling rate is 150 MHz or higher.
  • Another optional sampling method for the receive beamformer component of the beamformer module 3102 is bandwidth sampling.
  • Sampling theory provides that if a continuous function only contains frequencies within a bandwidth, B Hertz, it is completely determined by its values at a series of points spaced no more than 1/(2*B) seconds apart. Sampling a band-limited signal results in multiple copies of the signal spectrum appearing at a fixed relationship to the sampling spectrum. Provided these replicated spectra don't overlap, it is possible to reconstruct the original signal from the under-sampled data. For example, consider a signal with a maximum bandwidth of 20 MHz centered at 30 MHz, and sampled at a rate of 40 MHz. In this situation, the spectrum is replicated as shown in FIG. 33.
  • FIG. 33 illustrates bandwidth sampling of 30 MHz signal spectrum, which may be utilized in an embodiment of the receive beamformer component of the beamformer module 3102.
  • Sampling the signal spectrum in FIG. 33 using normal Nyquist sampling requires a sample rate of 80 MHz or higher.
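A brief numerical sketch of the bandwidth sampling idea follows; the tone, window, and FFT length are assumed values used only to show the spectrum folding, not anything taken from the patent:

```python
# A band-limited signal centered at 30 MHz can be sampled at 40 MHz and still
# be recovered, because its spectrum folds to 10 MHz without overlapping the
# adjacent replicas.
import numpy as np

fs = 40e6                                   # under-sampling rate (bandwidth sampling)
f0 = 30e6                                   # tone inside the 20-40 MHz band of interest
n = np.arange(4096)
x = np.cos(2 * np.pi * f0 * n / fs)         # sampled band-pass tone

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"30 MHz tone appears at {peak / 1e6:.1f} MHz after 40 MHz sampling")
# -> ~10 MHz: the replicated spectrum, from which the original band can be
#    reconstructed as long as adjacent replicas do not overlap.
```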
  • transducer center frequencies up to 60 MHz can be managed with 80 mega samples per second (MSPS) 10 bit A/D converters, which are known in the art and are available from several vendors.
  • In the example above, the signal spectrum had no frequency components outside of the 20 MHz bandwidth region (66.7% of the center frequency).
  • a transducer spectrum often has skirts that can extend beyond the 66.7% bandwidth region, creating overlapping spectra and inaccurate signal reconstruction.
  • These skirts can be dealt with by using a bandpass anti-aliasing filter prior to the A/D converter that limits the power in the spectral skirts extending beyond the bandwidth limits to a desired level, such as 5-10%.
  • Another form of bandwidth sampling, known as quadrature sampling, can optionally be used in an embodiment of the receive beamformer component of the beamformer module 3102.
  • samples can be repeated at an interval which is consistent with the bandwidth of the signal. For example, if the quadrature samples are taken at every period of the center frequency, the sample rate supports a 100% bandwidth signal.
  • the sample pair resulting from quadrature sampling is not a true complementary pair, since the samples are taken at different times; however, they are true samples of the analytic waveforms, and concurrent quadrature samples can be found by interpolating the samples of the two I and Q sampled waveforms to the same point in time.
  • Quadrature sampling may be implemented with one high sample rate converter sampling at four times the center frequency or with two lower frequency converters each operating at the center frequency but with clocks differing in phase by 90° with respect to the center frequency
  • Nyquist Sampling is yet another form of sampling.
  • This form of sampling is Nyquist sampling combined with bandwidth sampling. Normal Nyquist sampling is used for the lower transducer center frequencies and bandwidth sampling for the higher frequencies. 10 bit A/Ds with maximum sample rates of 105 MSPS are commercially available. With this sample rate capability, a 30 MHz center frequency transducer with 100% bandwidth can be sampled adequately at Nyquist rates. At 40 MHz, Nyquist sampling can be used for transducers with bandwidths up to approximately 60%, so for this center frequency or higher, bandwidth sampling can be used. If these higher sample rates are used, the beamformer processing circuitry also accommodates the higher clock rates and increased storage requirements.
  • a variation of quadrature sampling can be used to provide a higher bandwidth beamforming capability for those applications that can benefit from it (for example, harmonic imaging)
  • two quadrature sample pairs may be acquired for every cycle of the center frequency. For example, consider the sampling of a signal which has a center frequency of 30 MHz and significant spectral content beyond 100% bandwidth, such that the spectrum extends to frequencies less than 15 MHz and/or greater than 45 MHz.
  • Two A/D converters per channel may be used to acquire the RF signal at that channel, each sampling periodically at twice the center frequency, 60 MHz.
  • the sample clock of the second A/D converter is delayed by 1/4 the period of the 30 MHz center frequency relative to the sampling clock of the first A/D converter.
  • Every second sample acquired by the A/D converters will be multiplied by -1.
  • the sample stream originating from the first A/D converter will then be the down-converted quadrature (Q) sample stream, and that originating from the second A/D converter becomes the down-converted in-phase (I) sample stream.
  • the fine delay required for receive beamforming may be implemented by interpolation of the quadrature samples. This method allows for accurate sampling of the RF signal over 200% bandwidth of the center frequency.
  • the RF output of the beamformer can be formed using two acquisition pulses, similar to a synthetic aperture approach. For example, consider a 30 MHz signal spectrum with 100% bandwidth, so that the -6dB spectrum extends from 15 to 45 MHz. In this case, the signal can be sampled at a 60 MHz sample rate, and the sign of every other sample flipped, to provide a down-converted sample stream that can be taken as the Q channel of a quadrature down-conversion scheme. On the next acquisition, the sampling clock is delayed by 1/4 of the period of 30 MHz, providing (after flipping the sign of alternate samples) the I quadrature waveform.
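The sign-flip down-conversion described in the preceding items can be sketched numerically as follows. The signal model (a 30 MHz carrier with a slowly varying Gaussian envelope) and the helper name are assumptions; which stream is labeled I and which Q depends on the carrier phase convention and is not taken from the patent:

```python
# Sampling a 30 MHz echo at 60 MHz (2x the center frequency) and flipping the
# sign of every other sample yields one baseband component; repeating with the
# clock delayed by 1/4 of the 30 MHz period yields the other.
import numpy as np

fc = 30e6
fs = 2 * fc
t = np.arange(0, 2e-6, 1 / (16 * fc))              # dense "analog" time base
env = np.exp(-((t - 1e-6) ** 2) / (0.2e-6) ** 2)   # slowly varying envelope
rf = env * np.cos(2 * np.pi * fc * t + 0.7)        # band-pass echo signal

def sample(delay):
    ts = np.arange(0, 2e-6, 1 / fs) + delay
    s = np.interp(ts, t, rf)                       # model the A/D sampling instants
    return s * (-1) ** np.arange(len(s))           # flip the sign of alternate samples

q = sample(0.0)                 # first converter: undelayed clock
i = sample(1 / (4 * fc))        # second converter: clock delayed by 1/4 period

# The recovered envelope should match the original to within sampling error.
print(np.max(np.sqrt(i ** 2 + q ** 2)))            # close to the envelope peak of 1.0
```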
  • receive beamformer delay implementation is performed using the interpolation method. In this approach to beamforming, the A/D converters all sample concurrently, at a constant sample rate (using bandwidth or quadrature sampling).
  • the delays for steering and dynamic focusing are implemented in two steps: 1) a coarse delay stage that implements a delay which is an integral number of sample clock cycles, and 2) an interpolation filter that interpolates to 1/16 of a wavelength time positions in between the coarse samples.
  • the coarse delay stage performs the function of a programmable shift register, whose maximum length is equivalent to the maximum delay time desired in sample periods. The order of these two stages can be reversed if desired, depending on implementation considerations.
  • Bandwidth sampling interpolation may be described using the following example.
  • the sample rate on all A/D converters can be set to 40 MHz, providing a 66.7% bandwidth.
  • the programmable interpolators need only calculate one of eleven intermediate sample values (for 1/16 wavelength accuracy), equally spaced between adjacent 40 MHz samples.
  • the interpolators can be specifically designed for bandwidth sampling to provide for accurate signal reconstruction. Samples can be taken from the output of all channels' interpolators, and summed to produce the sampled RF waveform for the desired beamforming direction.
  • Every odd sample can be taken as samples of the Q component of the quadrature baseband representation of the signal (with alternate sign), while even samples can be considered to be samples of the I component.
  • a simple bandlimited interpolator can be used to find the I and Q signal values at the appropriate intermediate time point, which can then be combined to reconstruct the RF value.
  • all of the bandwidth sampled data points can be down-converted by the interpolation filters, resulting in a baseband quadrature sampled beamformer output, which can simplify downstream signal processing.
  • Quadrature sampling interpolation may be described using the following example. In this example, the input signals for each channel are assumed to be quadrature sampled, at one quadrature pair per cycle of the transducer center frequency, providing an input bandwidth of 100% around the center frequency. The two samples in the pair are taken at 90 degrees phase difference with respect to the center frequency, which provides actual samples of the Q and I baseband signals, but the waveforms are sampled at different points in time. Before the Q and I data can be combined, this sampling offset is corrected using interpolation filters. The interpolation required for correcting the sample offsets can optionally be incorporated into the interpolation filters used for beamforming.
  • the interpolation filters are operating on these signals, rather than the RF waveforms.
  • the samples for all channels are taken at the same time, which leads to I and Q waveforms with the same phase relative to an RF waveform common to all channels.
  • This is equivalent to using mixers on all channels to derive I or Q signals, where the carrier frequency for the mixers all have the same phase.
  • correct summation of the I and Q samples from different channels can be provided by adjusting the carrier phase on each channel to match the phase of the time delayed echo waveforms. This amounts to a phase rotation of the interpolated I, Q samples according to the interpolation point relative to 0 degrees phase of the RF center frequency period. This rotation can also be incorporated into coefficients of a FIR interpolation filter, to produce a corrected I and Q output from each channel that can be summed coherently.
  • As a way of explanation of the quadrature sampling interpolation beamforming method, one can first consider a simpler conceptual model, rather than an actual implementation. In this model, interpolation will be implemented to 16 separate points over the period of the center frequency, providing 1/16 wavelength accuracy for beamforming. This level of accuracy has been shown to be sufficient to provide no significant degradation of beam profiles.
  • the signal is a sine wave whose frequency is 0.9 times the frequency of sampling (which is, for example, 1 Hz in this instance).
  • the Q samples are shown as 'o's 3402, while the I samples are shown as 'x's 3404.
  • the Q and I samples are samples of much slower changing waveforms, which represent the baseband Q and I waveforms.
  • the interpolation filters operate on these waveforms, to compute 16 interpolation points per period of the sampling frequency.
  • FIG. 34 shows a quadrature sampled sine wave at 0.9 times the sample frequency.
  • the interpolation points are chosen so that the actual sample values don't fall on an interpolation point. This provides that the filter function inherent in the interpolation filter is applied to all points.
  • the positions of the 16 interpolated points with respect to the Q and I sample points are shown in FIG. 34A.
  • a four point FIR filter is sufficient for accurate interpolation.
  • a window of eight samples can be used, as shown in FIG. 34B.
  • FIG. 34C is a plot of the interpolated values for the example sine wave given in FIG. 34, overlaid on the sine wave of FIG. 34.
  • the interpolated points are shown as the dotted lines and start after the fourth sample point, which is the first position that a window can be applied (in this case, window #2).
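A small numerical sketch of the fine-delay interpolation idea follows. The windowed-sinc coefficients, the test tone, and the helper name interp_coeffs are illustrative choices, not the coefficients of the exemplary system; the sketch only shows that an 8-sample window with 16 fractional offsets per period reproduces intermediate values of a band-limited waveform:

```python
import numpy as np

def interp_coeffs(frac, taps=8):
    """Windowed-sinc coefficients for a fractional delay of `frac` samples (0..1)."""
    k = np.arange(taps) - (taps // 2 - 1)          # tap offsets around the interval
    h = np.sinc(k - frac) * np.hamming(taps)
    return h / h.sum()                             # normalize DC gain to 1

fs = 1.0                                           # normalized sample rate
f = 0.22                                           # band-limited test tone
n = np.arange(64)
x = np.sin(2 * np.pi * f * n / fs)

# Interpolate the value 5/16 of a sample period after sample index 20.
frac = 5 / 16
h = interp_coeffs(frac)
window = x[20 - 3 : 20 + 5]                        # 8-sample window around the point
estimate = np.dot(h, window)
truth = np.sin(2 * np.pi * f * (20 + frac) / fs)
print(f"interpolated {estimate:.4f} vs true {truth:.4f}")
```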
  • FIG. 36 is an illustration of a data set for the acquisition of a single ray line of echo information from a linear array, consisting of the quadrature sampled signals from each of the transducer elements over a depth range.
  • This data set can be viewed as an array with depth 3602 along one axis and channel number 3604 along the other.
  • an eight sample window is positioned in each channel's data row at the appropriate sample number, which corresponds to depth, and one of the 16 interpolation points is chosen which provides the exact delay required.
  • the various channel windows are positioned along a parabolic arc 3606, which corresponds to the curvature of focus needed to reconstruct the range point.
  • the beamforming parameters for the range point are then defined by providing a starting sample number and interpolation number for each channel included in the aperture. After applying the appropriate interpolation filters to each of the channel data shown above, an I and Q sample is obtained for each channel that corresponds to the appropriate delay for the range point. As previously described herein, these I and Q sample pairs cannot simply be summed to derive a beamformed I,Q pair, since the phase of the I,Q sample from each channel is different. Before the I,Q pairs from each channel can be summed, each channel's I,Q pair is phase rotated to correspond to the same phase with respect to the delay time implemented.
  • the amount of rotation is calculated by determining the distance of the reconstruction point from the start of a sample period, which is effectively the interpolation point number times 1/16 wavelength (plus 1/32 of the period, to be precise). This distance can be converted into an angle by taking the fraction of the total period and multiplying by 2*pi.
  • the rotation equations are then: rotated Q = Q*cos(angle) + I*sin(angle) (1), and rotated I = I*cos(angle) - Q*sin(angle) (2).
  • the rotation of the I and Q samples can be incorporated into the 8 coefficients used for interpolation. For example, when using the first interpolation window, where the even samples are I samples, the sin(angle) in equation (1), above, can be multiplied by each of the I coefficients, and the cos(angle) term multiplied by each of the Q coefficients. The resulting FIR filter then provides the rotated Q value, when all product terms are added together. Similarly, another set of coefficients can be used to compute the rotated I value. In this scheme, the FIR filter operates twice per sample period, using different coefficients, to produce an output stream of rotated Q and I values. This stream can be summed with the stream of rotated Q and I values from other channels to produce the beamformer output, which in this case is interleaved I,Q data representing the down-converted summed RF.
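The phase-rotation step can be sketched as follows. The equations follow the description above, while the sample values and the helper name rotate_iq are illustrative; as the text notes, the same rotation can instead be folded into the FIR interpolation coefficients:

```python
import numpy as np

def rotate_iq(i_val, q_val, point, points_per_period=16):
    # interpolation point number -> fraction of a center-frequency period -> angle
    angle = 2 * np.pi * point / points_per_period
    q_rot = q_val * np.cos(angle) + i_val * np.sin(angle)
    i_rot = i_val * np.cos(angle) - q_val * np.sin(angle)
    return i_rot, q_rot

# Channels interpolated at different interpolation points are rotated before
# their contributions are added to the beamformer sum.
print(rotate_iq(0.8, -0.3, point=5))
print(rotate_iq(0.8, -0.3, point=9))
```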
  • the interpolation of the Q and I values may be implemented with separate FIR filters, each with 4 coefficients.
  • the phase rotation is implemented in a stage following the interpolation.
  • the sampling scheme in which two quadrature pairs are acquired for each period of the center frequency also requires a phase rotation after interpolation of the quadrature samples.
  • two A/D converters per channel may be used to acquire the RF signal at that channel, each sampling periodically at twice the center frequency.
  • the sample clock of the second A/D converter is delayed relative to the sampling clock of the first A/D converter by 1/4 of the period of the center frequency. Every second sample acquired by the A/D converters will be multiplied by -1.
  • Interpolated values can be calculated for 16 separate points over the period of the center frequency, or for 8 points over the period of the sample clock.
  • the interpolation points calculated over a span of two sample clock periods may be numbered from 0 to 15.
  • the amount of phase rotation required is the interpolation point number multiplied by 2*pi/16. For example, when the interpolation point is located at 1/8 of a sample clock period after the start of odd numbered sample clock cycles, the amount of phase rotation will be 2*pi/16. When the interpolation point is located at 1/8 of a sample clock period after the start of even numbered sample clock cycles, the amount of phase rotation will be 2*pi*(9/16).
  • the interpolation points may be shifted by 1/32 of the center frequency period so that the actual sample values don't fall on an interpolation point, in order to ensure that the filter function inherent in the interpolation filter is applied to all points. After the phase rotation, the values can be summed to provide the beamformer output.
  • the amplitude of the envelope of the received signal output from a quadrature beamformer may be determined by calculating the square root of the sum of the squares of the I and Q samples. A compression curve may then be applied to the envelope amplitude values. Doppler processing can use the summed I and Q sample stream directly to derive Doppler frequency estimates and/or compute FFT spectral data.
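A minimal sketch of this detection step follows; the dynamic-range value used for the compression curve is an assumption, not a value from the patent:

```python
import numpy as np

def detect(i_sum, q_sum, dynamic_range_db=50.0):
    envelope = np.sqrt(i_sum ** 2 + q_sum ** 2)              # magnitude of each I/Q pair
    env_db = 20 * np.log10(envelope / envelope.max() + 1e-12)
    # simple logarithmic compression curve mapping to 0..1 for display
    return np.clip((env_db + dynamic_range_db) / dynamic_range_db, 0, 1)

i_sum = np.array([0.01, 0.2, 1.0, 0.4, 0.05])
q_sum = np.array([0.00, 0.1, 0.3, 0.2, 0.02])
print(detect(i_sum, q_sum))
```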
  • interpolation filters and control logic can be implemented with an FPGA device.
  • receive beamformer delay implementation may be performed using the interpolation method.
  • A high level diagram of a delay implementation is shown in FIG. 25. This diagram shows the functions after A-to-D conversion for a single beamformer channel. The outputs of the two A/D converters are multiplexed into a single sample stream at a constant rate of two times the center frequency. For 10 bit A/D converters, we then have a series of 10 bit samples coming from the A/D converters, with the first sample designated as a Q sample, followed by the I sample of the quadrature pair. This stream is the input to the dual port ram 2502 shown in FIG. 25.
  • a write pointer 2504 and a read pointer 2506 in the dual port ram are reset to the top of the ram 2502.
  • the sample is written to the ram 2502 at the address of the write pointer 2504, which is then advanced to the next sequential location.
  • When the write pointer 2504 reaches the end of the ram 2502, it wraps around to the beginning of the ram 2502 for the next write operation.
  • the dual port rams 2502 are large enough to store samples for the maximum delay required by the steering and focusing needed for the acquisition line.
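A simplified behavioural model of the write-pointer/read-pointer delay mechanism is sketched below; the class name, buffer size, and enable pattern are illustrative, not the hardware implementation:

```python
# Circular buffer with a write pointer that always advances and a read pointer
# that advances only when read-enable is true; stalling the read pointer
# lengthens the coarse delay.
class DelayRam:
    def __init__(self, size):
        self.ram = [0] * size
        self.wr = 0          # analogue of write pointer 2504
        self.rd = 0          # analogue of read pointer 2506

    def clock(self, sample, read_enable):
        self.ram[self.wr] = sample
        self.wr = (self.wr + 1) % len(self.ram)      # write pointer always advances
        out = self.ram[self.rd]
        if read_enable:
            self.rd = (self.rd + 1) % len(self.ram)  # advance only when enabled
        return out

dly = DelayRam(16)
# Hold read enable low for 3 clocks to build up a 3-sample coarse delay,
# then advance read and write pointers together.
out = [dly.clock(n, read_enable=(n >= 3)) for n in range(10)]
print(out)   # -> [0, 0, 0, 0, 1, 2, 3, 4, 5, 6]
```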
  • FIG.26 illustrates one mechanism for implementing the control signals required for a single channel.
  • a control ram's 2602 address is incremented at the input sample clock rate (2X the center frequency, Fc).
  • the ram 2602 then provides a registered output 2604 where each bit provides an independent control signal.
  • the receive delays are implemented in one embodiment according to the present invention.
  • For echoes returning from a point located along the centerline of the receive aperture, the echo first appears in the signals from the element or elements closest to the center of the aperture, then later in the elements near the outer portion of the aperture.
  • the center signals can be delayed a period of time before they can be summed with the signals from the outer edge.
  • longer delays are achieved by letting the read pointer 2506 lag further behind the write pointer 2504. Therefore, the center channel in the aperture will have the greatest difference between read pointer 2506 and write pointer 2504, while the outer channels will have the smallest difference.
  • the focal point is moved outward along the receive line at half the speed of sound, so that the focal point is always at the location of the echo being received.
  • As the focal point moves outward, the delay between the center and outer channels of the aperture decreases.
  • f-number: the focal length divided by the aperture size
  • the center channel is delayed by the maximum delay amount (the amount for the full aperture) by letting the write pointer 2504 move ahead of the read pointer 2506 until the delay is achieved. At that point, the read pointer 2506 is moved ahead at the same rate as the write pointer 2504.
  • An outer channel's initial delay is set by letting the write pointer 2504 move ahead of the read pointer 2506 by the appropriate amount. This initial delay offset can be less than the offset of the read 2506 and write pointers 2504 of the center channel. At this point, the read pointer 2506 is moved ahead at the same rate as the write pointer 2504 until the channel is made active in the aperture.
  • the above operation can be directed with only two binary state control signals as shown in FIG.26A.
  • the first signal is a read pointer advance enable (RPE) 2600, which allows the read pointer 2506 to advance concurrently with the write pointer 2504.
  • When this signal is true, the write pointer is advanced after the data is written to the dual port ram 2502, and the read pointer 2506 is advanced at the same time.
  • When the signal is false, the write pointer 2504 is advanced following a write operation, but the read pointer 2506 remains the same.
  • the RPE control signal 2606, 2606a is used not only to set the initial delay of a channel, but also to implement the dynamic focus coarse delays.
  • the second control signal (CE) 2608, 2608a merely specifies when the channel's output becomes active, so that it participates in the summation of all active channels. This can be accomplished by the CE signal 2608, 2608a controlling the 'clear' input of the final output register of the interpolators. A channel is made active in the aperture according to when its element sensitivity pattern allows it to receive the returning echoes with less than some threshold amount of attenuation. This time must be consistent with the initial delay time implemented by the first control signal. It should be noted that the CE signal 2608 specifies the time a channel becomes active in terms of the number of quadrature samples from the start of the acquisition line. This is because when a channel first participates in the sum of channels, it must contribute a quadrature pair. In the case of the Fc * 2 sample clock, there are two clocks for every quadrature sample pair.
  • FIG.26 illustrates the control signals as they might appear for a center element (2606 and 2608), and an element at the outer edge (2606a and 2608a) of the full aperture.
  • For the center channel, RPE 2606 is held low for the maximum delay time needed. This allows the write pointer 2504 to move ahead while the read pointer 2506 stays put. After the delay time is reached, RPE 2606 is set high (true) to allow the read pointer 2506 to advance at the same rate as the write pointer 2504. Since there is no dynamic focus required for the center channel, RPE 2606 remains high for the remainder of the acquisition line.
  • the center channel CE signal 2608 brings the channel active shortly after the delay time is reached. The offset is to allow the shift register and register used for the interpolation filter to fill. The CE signal 2608 then removes the clear on the output register so the channel's data can enter the summation bus.
  • For the outer channel, RPE 2606a is held low for only a short time, since its initial delay is much shorter than the center channel. Then RPE 2606a is set high, allowing this delay to be maintained until the channel is made active. At that time, the RPE signal 2606a is set low for a single clock cycle occasionally to implement the dynamic focus pattern.
  • the CE signal 2608a removes the clear on the output register when the channel can participate in the summation.
  • the interpolation filters provide the fine delay resolution for beamforming.
  • two eight point FIR filters are applied - one to generate the analytic signal I sample, and the other to generate the Q sample. This means that the interpolation filter operates twice per period of the center frequency, or at the Sample Clk (Fc * 2) rate.
  • the I and Q samples are output in succession to the output register, which if enabled, feeds the samples into the summation bus.
  • the input for the interpolation filters comes from the read address of the dual port ram 2502, which typically advances by one sample (I or Q) for each Sample Clk (Fc*2).
  • the sample is input to an eight sample shift register 2508, which holds the last eight samples read. If the read operation of the dual port ram 2502 is not enabled (RPE low), then no data enters the shift register 2508, and the read pointer 2506 is not advanced.
  • the shift register 2508 still holds the last eight samples, and no samples are lost when the read pointer 2506 does not advance; the read pointer 2506 simply falls further behind the write pointer.
  • the samples in the shift register 2508 are transferred in parallel to the inputs of the interpolation filter multipliers 2510. There they remain for the two multiply/accumulate operations that generate the I and Q outputs.
  • the samples moved to the multiplier inputs shift forward in time by two samples for each center frequency period.
  • the filter then outputs an I and Q sample for each period of the center frequency.
  • When the read cycle of the dual port ram is disabled, the samples moved to the multiplier inputs shift forward by only one sample. This allows the interpolation point to move forward in time by less than a full period of the center frequency. With dynamic focus on an outer channel, the interpolation point is gradually moving back in time, towards the same time as the center channel.
  • the coefficients used by the interpolation filters are stored in a small ram 2512, which can be loaded by the system CPU.
  • the ram 2512 can hold 32 sets of coefficients, 16 for the I interpolation point and 16 for the Q interpolation point.
  • the coefficients are selected by five address lines, four of which are control lines that come from the control ram 2602. These four lines must provide a new address every other sample clock (Fc*2).
  • the other line selects the I or Q coefficient set for the interpolation point chosen, and can be toggled with the operation of the filter, producing an I and Q sample every period of the center frequency.
  • the output register 2514 for the interpolation filter holds the output samples before they enter the summation bus. This register's clear input is controlled by the CE control line. This allows a channel to be disabled from contributing to the sum bus until the interpolation output is valid.
  • Another way to implement the interpolation filters, phase rotation and dynamic apodization is shown in FIG. 25B.
  • all digital circuit elements in the upper box 2520 which require a clock are clocked at the receive frequency clock.
  • All digital circuit elements in the lower box 2522 which require a clock are clocked at twice the receive frequency clock.
  • the input I,Q data from the analogue to digital converters (ADCs) 2524, 2526 are written to separate FIFOs 2528, 2530.
  • the samples output from the ADCs 2524, 2526 may undergo an offset correction in which a predetermined constant value is added.
  • the samples from the output of the ADC offset correction stage 2524, 2526 are stored simultaneously into the FIFOs 2528, 2530, so the writing of the new sample into the FIFOs does not require separate timing logic.
  • All the channels share the same write enable signals.
  • the read side of the FIFO of each I and Q channel 2528, 2530 uses independent read enable signals 2532, 2534, controlled by receive delay signals generated by the Beamformer Controller.
  • the start of the read enable signals 2532, 2534 of each FIFO is delayed by a number of receive clock cycles equal to the initial coarse delay value 2536 required for each channel. If the read enable signal 2532, 2534 is held low while data is written into the FIFO 2528, 2530, the read out of the FIFO will be suspended and the coarse delay 2536 will increase. When the read enable signal 2532, 2534 goes high, the coarse delay 2536 that is applied remains constant. To align echoes in the signals from the center and the outer edge of the aperture, the center signals will be delayed a period of time before they can be summed with the signals from the outer edge. The delay value for sampled data at the center of the aperture is greater than that of the outer edges.
  • Dynamic receive focusing requires a control signal DF 2538 which goes high when the interpolation filter index 2540 needs to be changed.
  • the interpolation filter index 2540 is a modulo 16 number ranging from 0 to 15.
  • the interpolation filter index 2540 will decrease when the interpolation point has shifted by 1/16 wavelength.
  • the FIFO read enable signal 2532, 2534 will go low for one clock cycle, to increase the coarse delay 2536 by one.
  • the fine delay is implemented by interpolation.
  • the interpolation filters are implemented as systolic FIR filters with 4 taps 2542, 2544, 2546, 2548.
  • Each interpolation point has 4 coefficients 2550, 2552, 2554, 2556.
  • the same interpolation filter can be used for both the I and Q samples.
  • Different sets of coefficients are used for the I and Q interpolation, since the I and Q samples acquired by the ADCs are sampled at different points in time but are interpolated to the same point in time.
  • the interpolation filter index for the Q samples will be offset from that of the I samples by 4.
  • the coefficients used in the interpolation filter can alternate between I coefficients and Q coefficients by switching the address of the RAM 2558 which stores the coefficients.
  • the interpolation filter indices are represented by the address counters 2560 for the coefficients.
  • the address counters 2560 for the I and Q coefficients decrease by one when the DF signal 2538 goes high for one clock cycle.
  • the output of the interpolation filter 2560 is I/Q interleaved.
  • the interpolated signals are fed to the phase rotation stage 2564, 2566 shown in Figure 25B.
  • Sine and cosine coefficients are stored in RAMs as look-up-tables 2572, 2574. There are 16 sets of sine and cosine values.
  • the addresses of the cosine and sine look up tables (LUT) 2572, 2574 are updated at the same time as the interpolation filter coefficients 2550, 2552, 2554, 2556.
  • the phase rotation circuit 2564, 2566 also operates at twice the center frequency. Every second operating cycle produces a pair of valid Ir and Qr data.
  • the outputs of the phase rotation 2568, 2570 are multiplied by a factor which is dynamically changed during receive. Also if the multiplication factor in a channel is set to zero, the channel does not contribute to the aperture. This way, dynamic aperture updating is achieved. I and Q samples are interleaved through a multiplexer (MUX) 2572 to a common multiplier, which reduces the multiplier resources required.
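Putting the stages of FIG. 25B together, a greatly simplified per-channel sketch follows. The coefficients, look-up-table size, apodization weight, and the separate handling of I and Q streams are my own simplifications of the circuit, intended only to show the order of operations:

```python
import numpy as np

N_POINTS = 16
angles = 2 * np.pi * np.arange(N_POINTS) / N_POINTS
cos_lut, sin_lut = np.cos(angles), np.sin(angles)      # sine/cosine look-up tables

def channel_path(i_samples, q_samples, coarse_delay, coeffs_i, coeffs_q, point, apod):
    # coarse delay: equivalent to stalling the FIFO read for `coarse_delay` clocks
    i_d = np.concatenate([np.zeros(coarse_delay), i_samples])
    q_d = np.concatenate([np.zeros(coarse_delay), q_samples])
    # fine delay: 4-tap interpolation with separate I and Q coefficient sets
    i_f = np.convolve(i_d, coeffs_i, mode="same")
    q_f = np.convolve(q_d, coeffs_q, mode="same")
    # phase rotation selected by the interpolation point number
    i_r = i_f * cos_lut[point] - q_f * sin_lut[point]
    q_r = q_f * cos_lut[point] + i_f * sin_lut[point]
    # apodization multiply; a zero weight removes the channel from the aperture
    return apod * i_r, apod * q_r

i_out, q_out = channel_path(np.random.randn(32), np.random.randn(32),
                            coarse_delay=4,
                            coeffs_i=np.array([-0.10, 0.60, 0.60, -0.10]),
                            coeffs_q=np.array([-0.05, 0.55, 0.62, -0.12]),
                            point=3, apod=0.75)
print(i_out.shape, q_out.shape)   # per-channel outputs ready for the summation bus
```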
  • The use of interpolation filters for beamforming allows multi-line scanning.
  • In multi-line scanning, several receive lines are reconstructed in the same transmit beam, as shown in FIG. 38.
  • the transmit beam is typically broadened with a large depth of field to cover the region where the receive lines will be acquired.
  • the interpolation filter delay implementation allows all lines to be processed concurrently. This method works with bandwidth sampling, where the interpolation filters can be operated at a rate higher than the sample rate, as is shown in the exemplary conceptual implementation of the interpolation filter method for an individual channel in FIG. 39.
  • the digital samples from an individual receive channel's A/D converter are sent through a variable length shift register 3902 to implement a coarse delay of an integer number of samples.
  • the output of the variable length shift register 3902 is then sent to a second shift register 3904, where the individual shift stages can be accessed.
  • the interpolation filter can operate on a subset of samples, which for the example shown is eight samples.
  • the interpolation filter provides the fine delay for 1/16 wavelength or better resolution. In the example above, the interpolation filter provides an interpolated sample 1 between cells 4 and 5 of the filter shift register.
  • the interpolation filter is operated three times for every sample shift. In the example of FIG. 40, the filter window is offset from the nominal position by one sample backwards for the first receive line, and one sample forward for the third receive line.
  • the position of the filter windows for each line is programmable. In situations where the delay differences are greater than one sample, the filter shift register can be expanded to allow greater separations between windows. For bandwidth sampling, where there are only one or two samples per wavelength, the filter windows would often not need to be separated by more than one sample period.
  • the output of the filter operations is time multiplexed into a single output stream. This stream is summed with the contributions from other channels to produce the beamformer output. Note that for 3-1 multi-line the summation circuitry is capable of operating at three times the sample rate. The summation output of the beamformer can then be de- multiplexed to generate the three multi-line receive lines for downstream processing. The downstream processing is capable of processing three lines in the acquisition time of a single ray line.
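A conceptual sketch of 3-to-1 multi-line processing on a single channel follows; the placeholder filter coefficients, window offsets of -1, 0 and +1 samples, and the helper name are assumptions used only to show the time-multiplexed filter passes:

```python
import numpy as np

def multiline_outputs(channel_samples, coeffs, offsets=(-1, 0, 1), taps=8):
    """Run the interpolation filter once per receive line for each sample shift."""
    outs = {off: [] for off in offsets}
    for n in range(taps + 1, len(channel_samples) - 1):
        for off in offsets:                       # three filter passes per shift
            window = channel_samples[n - taps + off : n + off]
            outs[off].append(np.dot(coeffs, window))
    return outs

coeffs = np.hamming(8) / np.hamming(8).sum()      # placeholder interpolation filter
data = np.sin(2 * np.pi * 0.2 * np.arange(64))
lines = multiline_outputs(data, coeffs)
print({k: len(v) for k, v in lines.items()})      # one output stream per receive line
```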
  • the output is a digital data stream of samples representing the sampled RF data along a reconstruction line.
  • This stream is derived by summing the data samples from all receive channels that participate in the receive active aperture.
  • the RF data stream can be captured in a buffer with sufficient storage to hold an entire ray line. This same buffer can be used for synthetic aperture acquisitions, and can be summed with the RF data from the second half of the receive aperture as it exits from the summation circuitry.
  • the summed RF data stream exits the beamformer as a raw RF stream.
  • This data stream can be converted to a different format using a pair of complementary 90 degree phase difference filters, often referred to as Hilbert transform filters. These filters effectively band-pass the RF signal and down-convert it at the same time to baseband quadrature data streams. These baseband I and Q data streams can then be combined to provide echo amplitude data for 2D imaging, or processed further for Doppler blood flow detection.
  • the Hilbert transform filters can also be used to selectively filter and process a portion of the received signal spectrum, as is needed for harmonic imaging, or frequency compounding. In the case of frequency compounding, the filters can be time multiplexed to produce interleaved output samples from different frequency bands of the spectrum.
  • the beamformer module 3102 can also comprise a beamformer control.
  • the beamformer requires a controller to sequence its operation.
  • the controller can be implemented as a simple state machine, which specifies a series of beamformer events.
  • Each beamformer event can specify a transmit action, a receive action, and/or a signal processing action.
  • Transmit actions specify all the parameters associated with transmitting pulses from the array. These include the duration of connection of the pulsers to the desired elements in the array, the delay times of each pulser, the transmit waveform characteristics, and the transmit aperture apodization function.
  • Receive actions specify all parameters associated with receiving and beamforming the returning echoes.
  • the signal processing actions specify what to do with the summation output, such as buffering it for synthetic aperture or sending it to the Hilbert transform filters.
  • the Hilbert transform filters are specified to perform whatever action is needed for the beamformer event.
  • the control of the beamforming process can be complex, and a method of handling this complexity is to encode all the information prior to realtime scanning in memory blocks used to control the hardware.
  • the beamformer controller's task is then reduced to 'pointing' to the appropriate portion of the memory block to retrieve the information needed for a beamformer event.
  • Setting up the beamformer for a specific mode of operation is then accomplished by loading all the memory blocks with parameter information, then programming the various beamformer events with their respective pointers into the controller's state machine.
  • the controller is then told to run, and steps through the events for an entire frame of acquisition data. At the end of the frame, the controller looks for a stop signal, and if none is found, repeats the whole sequence again.
  • Embodiments of the exemplary ultrasound system are capable of very high acquisition frame rates in some modes of operation, in the range of several hundred frames per second or higher.
  • exemplary embodiments process displayed ultrasound image information at 30 fps or lower, even if the acquisition rates are much higher, through the use of asynchronous processing as described in reference to FIG. 28. It is to be appreciated, however, that for Nyquist sampled data, the storage is increased by 50 - 100%.
  • the signal processing hardware/software has random access to the RF memory buffer, and accesses the RF data from a single acquisition frame to produce the displayed estimate data.
  • the maximum frame rate for signal processing and display is 30fps, which is typically set by a timer, which signals the signal processing task every 1/30th of a second.
  • the signal processing/display task waits for the next 1/30 of a second time tick. At that time, the signal processing task reads the 'Last Frame' pointer from the Write Controller to see if a new frame is available.
  • If the 'Last Frame' pointer has not advanced from the previously processed frame, signal processing does nothing, and waits for the next 1/30 of a second tick. If the 'Last Frame' pointer has changed, signal processing begins on the frame indicated by the pointer. In this manner, signal processing always starts on a 1/30 second tick, and always works on the most recent frame acquired. If acquisition is running much faster than 30fps, then the 'Last Frame' pointer will advance several frames with each signal processing action. After the system has been put in freeze, the RF frames stored in the memory buffer can be processed at any desired rate, up to the original acquisition rate.
  • Synthetic aperture beamforming is also supported by this memory buffer scheme.
  • the various lines which make up the synthetic aperture are acquired into the memory buffer sequentially, so that the size of an RF storage frame increases.
  • This is simply a different parameter for the Write Controller, which keeps track of how many lines are written per acquisition frame.
  • signal processing then combines the multiple RF lines in a synthetic aperture to produce the final result.
  • the RF data for cineloop playback also provides for re-processing the data in different ways, bringing out new information. For example, the wall filters for color flow imaging can be changed during playback, allowing optimization for the specific flow conditions.
  • the buffer memory can be dumped to an external storage device, providing multiple frames of RF data for analysis.
  • the buffer memory can be loaded with test RF data from the CPU, allowing debug, analysis and verification of the signal processing methods.
  • down-converted quadrature sampled data is derived from the RF data for amplitude detection and Doppler processing.
  • This can be obtained with complementary phase FIR filters that are designed to have a 90 degree phase difference over the frequencies in the pass band. These filters can also down-convert the sample stream to a lower sample rate, provided the output sample rate is still sufficient to sample the range of frequencies in the signal.
  • the filters operate on RF data that is shifted by an integral number of cycles of the center frequency of the spectrum. Alternately, different filters can be designed for non-integer number of cycle shifts to obtain smaller decimation ratios.
  • a schematic design of an exemplary Hilbert filter, as known in the art to one of ordinary skill, is shown in FIG. 41.
  • the filters are designed by first computing a low pass filter using a windowing method.
  • the filter length should be around 40 taps to ensure a good response over a broad range of frequencies, and should be a multiple of the number of samples in the period of the center frequency of the RF data. For example, if the sample rate is 120MHz and the center frequency is 30MHz, there are 4 samples in the period of the center frequency and an appropriate filter length would be 40 taps (10 periods).
  • the low pass coefficients are then multiplied by a sine and cosine function, whose frequency matches the center frequency. In the 30 MHz example, each period of the sine and cosine function has 4 samples.
  • the filters are applied on samples that are shifted by an integral number of cycles of the center frequency.
  • the RF samples are shifted by 4 samples at a time, leading to a decimation ratio of 4 to 1. With this decimation ratio, the input signal is restricted to 100% bandwidth, otherwise, aliasing of the output samples will result.
  • the filters can use alternate coefficient sets to preserve the phase information.
  • two sets of coefficients are used - one for 0 degrees phase, and another for 180 degrees phase.
  • These alternate coefficient sets are obtained by sampling the sine and cosine at the appropriate phase increments before multiplying with the low pass filter coefficients.
  • the shift between output samples is 1/2 the period of the center frequency
  • a simple method to provide the decimation rate is to leave the coefficients the same, and invert the sign of the filter output for the 1/2 period increments.
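The quadrature filter design outlined above can be sketched as follows. The window function, bandwidth, and test echo are assumptions rather than the patent's values; the sketch builds a 40-tap low pass prototype, multiplies it by cosine and sine at the 30 MHz center frequency (4 samples per period at 120 MHz), and applies the pair with a 4-sample shift to obtain 4:1 decimated baseband I and Q streams:

```python
import numpy as np

fs, fc, taps = 120e6, 30e6, 40
n = np.arange(taps) - (taps - 1) / 2
cutoff = 15e6                                       # +/-15 MHz pass band (100% bandwidth)
lowpass = np.sinc(2 * cutoff / fs * n) * np.hamming(taps)
lowpass /= lowpass.sum()

t = np.arange(taps) / fs
h_i = lowpass * np.cos(2 * np.pi * fc * t)          # in-phase filter
h_q = lowpass * np.sin(2 * np.pi * fc * t)          # quadrature filter

# Apply to an RF test echo and decimate by 4 (one center-frequency period).
time = np.arange(1024) / fs
rf = np.exp(-((time - 4e-6) ** 2) / (0.5e-6) ** 2) * np.cos(2 * np.pi * fc * time)
i_bb = np.convolve(rf, h_i, mode="same")[::4]
q_bb = np.convolve(rf, h_q, mode="same")[::4]
envelope = 2 * np.sqrt(i_bb ** 2 + q_bb ** 2)       # factor 2 restores the amplitude
print(envelope.max())                               # roughly the echo peak amplitude
```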
  • the pass band characteristics of the filters can be modified using different windowing functions. This may be desirable in applications such as harmonic imaging or tracking filters.
  • Frequency compounding can be achieved without additional filters for high decimation ratios, provided that the filters can operate at the input sample rate.
  • two filters can be used with different center frequencies that operate on RF data at two sample shift increments.
  • the filter block outputs a different filter result every two samples.
  • the two interleaved I,Q samples from the different filters are then detected and summed together to produce a 4 to 1 decimated detected output.
  • Example 3: An embodiment of the exemplary system interfaced to an array with up to 256 elements may be used to obtain ultrasound images.
  • Table 4 shows exemplary depth range, field of view, frame rate in B-Mode and frame rate in color flow imaging (CFI) for acquiring images.
  • These operating parameters can be used for the particular small animal imaging application described in the far left column. As would be clear to one skilled in the art, however, other combinations of operating parameters can be used to image other anatomic structures or portions thereof, of both small animal and human subjects.
  • a small animal subject is used and the animal is anesthetized and placed on a heated small animal platform.
  • ECG electrodes are positioned on the animal to record the ECG waveform.
  • a temperature probe is positioned on the animal to record temperature.
  • the important physiological parameters of the animal are thereby monitored during imaging.
  • the anesthetic used may be, for example, isoflurane gas or another suitable anesthetic.
  • the region to be imaged is shaved to remove fur.
  • an ultrasound conducting gel is placed over the region to be imaged.
  • the ultrasound array is placed in contact with the gel, such that the scan plane of the array is aligned with the region of interest. Imaging can be conducted "free hand” or by mounting the array onto a fixture to hold it steady.
  • B-Mode frame rates are estimated for the different fields of view indicated in Table 4. Higher frame rates are achievable with reduced field of view.
  • Color flow imaging (CFI) frame rates are estimated for the indicated color box widths, with line density one-half that of B-mode, and with the B-mode image acquired concurrently.
  • a mouse heart rate may be as high as 500 beats per minute, or about 8 beats per second. As the number of frames acquired per cardiac cycle increases, the motion of the heart throughout the cardiac cycle can be more accurately assessed.
  • the frame rate should be at least 10 frames per cardiac cycle, and preferably 20 for better temporal resolution. Therefore, in one embodiment frames are acquired at a rate of at least 160 frames per second, with a field of view large enough to include a long axis view of the mouse heart and surrounding tissue (10-12 mm). For example, using a 30 MHz linear array, the frame rate for a 12 mm field of view is about 180 frames per second. For smaller fields of view, the frame rates used are higher; (e.g., for a 2 mm field of view, with the 30 MHz linear array frame rates of over 900 frames per second can be used for viewing rapidly moving structures such as a heart valve).
  • the maximum velocities present in the mouse circulatory system (in the aorta) may be as high as 1 m/s in normal adult mice, but in pathological cases can be as high as 4-5 m/s.
  • the Pulse Repetition Frequency (PRF) for PW Doppler must be relatively high.
  • PW Doppler mode PRFs as high as 150 KHz are used, which for a center frequency of 30 MHz and a Doppler angle of 60°, allows for unaliased measurement of blood velocities of 3.8 m/s.
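The quoted velocity limit follows from the standard pulsed Doppler aliasing relation, checked numerically below under an assumed sound speed of 1540 m/s:

```python
# v_max = c * PRF / (4 * f0 * cos(theta))
import math

c, prf, f0, theta = 1540.0, 150e3, 30e6, math.radians(60)
v_max = c * prf / (4 * f0 * math.cos(theta))
print(f"{v_max:.2f} m/s")   # ~3.85 m/s, consistent with the ~3.8 m/s quoted above
```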
  • the frame rate for B-Mode Imaging is determined by the two-way transit time of ultrasound to the maximum depth in tissue from which signals are detected, the number of lines per frame, the number of transmit focal zones, the number of lines processed for each transmit pulse and the overhead processing time between lines and frames. Images obtained with different transmit focal zone locations can be "stitched" together for improved resolution throughout the image at the expense of frame rate, which will decrease by a factor equal to the number of zones.
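As a rough order-of-magnitude check of the frame-rate factors just listed, the following sketch uses assumed values for depth, line count, and per-line overhead; they are illustrative only and are not the patent's parameters:

```python
def bmode_frame_rate(depth_m, n_lines, n_zones=1, overhead_s=5e-6, c=1540.0):
    line_time = 2 * depth_m / c + overhead_s     # two-way transit time plus overhead
    return 1.0 / (n_lines * n_zones * line_time)

# e.g. a 12 mm deep acquisition with 256 ray lines and a single focal zone:
print(f"{bmode_frame_rate(0.012, 256):.0f} fps")  # ~190 fps with these assumed values
```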
  • Multi-line processing, which involves the parallel processing of ultrasound lines, can be used to increase frame rate.
  • PW Doppler features include a PRF range from about 500 Hz to about 150 KHz, alternate transmit frequency selection, the selection of range gate size and position, the selection of high-pass filter cut-off, and duplex mode operation in which a real-time B-Mode image is displayed simultaneously with the PW Doppler display. The transmit frequency used in PW Doppler mode may be the same as the transmit frequency used in B-Mode, or it may be different.
  • the ability to steer the PW Doppler beam is dependent on the frequency and pitch of the array used, and the directivity of the elements in the array, as would be appreciated by one skilled in the art. For an array with a pitch of 75 microns and operating in PW Doppler mode at a transmit frequency of 24 MHz, the beam may be steered up to approximately 20°. For this array, larger steering angles would result in unacceptably large grating lobes, which would contribute to the detection of artifactual signals.
  • Color flow imaging can be used to provide estimates of the mean velocity of flow within a region of tissue.
  • the region over which the CFI data is processed is called a "color box.”
  • B-Mode data is usually acquired nearly simultaneously with the Color Flow data, by interleaving the B-Mode lines with Color Flow lines.
  • the Color Flow data can be displayed as an overlay on the B-Mode frame such that the two data sets are aligned spatially.
  • CFI includes a PRF range from about 500 Hz to about 25 to 75 KHz, depending on the type of array. With a 40 MHz center frequency, a 75 KHz PRF, and a 0° angle between ultrasound beam axis and velocity vector, the maximum unaliased velocity will be about 0.72 m/s.
  • Beam steering can depend on the characteristics of the array (specifically the element spacing), the transmit frequency, and the capabilities of the beamformer; e.g., steering may not be available at the primary center frequency, but may be available at an alternate (lower) frequency.
  • the beam can be steered up to approximately 20°. Larger steering angles would result in unacceptable grating lobe levels.
  • Color flow imaging features can include the selection of the color box size and position, transmit focal depth selection, alternate frequency selection, range gate size selection, and selection of high pass filter cut-off.
  • Power Doppler is a variation of CFI which can be used to provide estimates of the power of the Doppler signal arising from the tissue within the color box.
  • Tissue Doppler mode is a variation of CFI in which mean velocity estimates from moving tissue are provided.
  • Multi-line processing is a method which may be applied to the CFI modes, in which more than one line of receive data is processed for each transmit pulse transmitted.
  • the beamformer may be capable of supporting modes in which 2-D imaging and Doppler modes are active nearly simultaneously, by interleaving the B-Mode lines with the Doppler lines.
  • 3-D imaging, as known to one of ordinary skill in the art, utilizes mechanical scanning in the elevation direction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • Physiology (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultra Sonic Daignosis Equipment (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

A system for acquiring an ultrasound signal comprises a signal processing unit adapted for acquiring a received ultrasound signal from an ultrasound transducer having a plurality of elements. The system is adapted to receive ultrasound signals having a frequency of at least 20 megahertz (MHz) with a transducer having a field of view of at least 5.0 millimeters (mm) at a frame rate of at least 20 frames per second (fps). The signal processing can further produce an ultrasound image from the acquired ultrasound signal. The transducer can be a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, or a curved array transducer.

Description

HIGH FREQUENCY ARRAY ULTRASOUND SYSTEM
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Patent Application No. 60/733,091 filed November 2, 2005, and the benefit of U.S. Provisional Patent Application No. 60/733,089 filed November 2, 2005, both of which are fully incorporated herein and made a part hereof.
BACKGROUND OF THE INVENTION
Ultrasound echography systems using an arrayed transducer have been used in human clinical applications where the desired image resolution is in the order of millimeters. Operating frequencies in these clinical systems are typically below 10 MHz. With these low operating frequencies, however, such systems are not appropriate for imaging where higher resolutions are needed, for example in imaging small animals such as mice or small tissue structures in humans.
Moreover, small animal imaging applications present several challenging requirements which are not met by currently available imaging systems. The heart rate of an adult mouse may be as high as 500 beats per minute, so high frame rate capability may be desired. The width of the region being imaged, the field of view, should also be sufficient to include the entire organ being studied.
Ultrasound systems for imaging at frequencies above 15 MHz have been developed using a single element transducer. However, arrayed transducers offer better image quality, can achieve higher acquisition frame rates and offer other advantages over single element transducer systems.
The embodiments according to the present invention overcome many of the challenges in the current art, including those described above.
SUMMARY OF THE INVENTION
Provided herein is a system and method for acquiring an ultrasound signal comprised of a signal processing unit adapted for acquiring a received ultrasound signal from a ultrasound transducer having a plurality of elements. The system can be adapted to receive ultrasound signals having a frequency of at least 15 megahertz (MHz) with a fixed transducer having a field of view of at least 5.0 millimeters (mm) at a frame rate of at least 20 frames per second (fps). The signal processing unit can further produce an ultrasound image from the acquired ultrasound signal. The transducer can be, but is not limited to, a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, or a curved array transducer. The system can include such a transducer or be adapted to operate with such a transducer.
Also provided herein is a system and method for acquiring an ultrasound signal comprising a processing unit for acquiring received ultrasound signals from an ultrasound transducer operating at a transmit and receive frequency of at least 15 MHz, wherein the processing unit comprises a signal sampler that uses quadrature sampling to acquire the ultrasound signal.
Additional advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments according to the invention and together with the description, serve to explain the principles of the invention: FIG. 1 is a representation in block diagram form of a computing operating environment;
FIGS. 2A-2C are exemplary top, bottom and cross-sectional views of an exemplary schematic PZT stack of the present invention, the top view showing, at the top and bottom of the PZT stack, portions of the ground electric layer extending outwardly from the overlying lens; the bottom view showing, at the longitudinally extending edges, exposed portions of the dielectric layer between individual signal electrode elements (as one will appreciate, not shown in the center portion of the PZT stack are the lines showing the individualized signal electrode elements - one signal electrode per element of the PZT stack);
FIG. 3 A is a top plan view of an interposer for use with the PZT stack of FIGS. 2A-2C, showing electrical traces extending outwardly from adjacent the central opening of the transducer and ground electrical traces located at the top and bottom portions of the interposer, showing a dielectric layer disposed thereon a portion of the surface of the interposer, the dielectric layer defining an array of staggered wells positioned along an axis parallel to the longitudinal axis of the interposer, each well communicating with an electrical trace of the interposer, and further showing a solder paste ball bump mounted therein each well in the dielectric layer such that, when a PZT stack is mounted thereon the dielectric layer and heat is applied, the solder melts to form the desired electrical continuity between the individual element signal electrodes and the individual traces on the interposer - the well helping to retain the solder within the confines of the well;
FIG. 3B is a partial enlarged view of the staggered wells of the dielectric layer and the electrical traces of the underlying interposer of FIG. 3 A, the well sized to accept the solder paste ball bumps;
FIG. 4A is a top plan view of the PZT stack of FIG. 2A mounted thereon the dielectric layer and the interposer of FIG. 3A;
FIG. 4B is a top plan view of the PZT stack of FIG. 2A mounted thereon the dielectric layer and interposer of FIG. 3A, showing the PZT stack as a transparent layer to illustrate the mounting relationship between the PZT stack and the underlying interposer, the solder paste ball bumps mounted therebetween forming an electrical connection between the respective element signal electrodes and the electrical traces on the interposer;
FIG. 5A is a schematic top plan view of an exemplary circuit board for mounting the transducer of the present invention thereto, the circuit board having a plurality of board electrical traces formed thereon, each board electrical trace having a proximal end adapted to couple to an electrical trace of the transducer and a distal end adapted to couple to a connector, such as, for example, a cable for communication of signals therethrough;
FIG. 5B is a top plan view of an exemplary circuit board for mounting of an exemplary 256-element array having a 75 micron pitch;
FIG. 5C is a top plan view of the vias of the circuit board of FIG. 5B that are in communication with an underlying ground layer of the circuit board;
FIG. 6 is a top plan view of a portion of the exemplified circuit board showing, in Region A, the ground electrode layer of the transducer wire bonded to an electrical trace on the interposer, which is, in turn, wire bonded to ground pads of the circuit board, and further showing, in Region B, the individual electrical traces of the transducer wire bonded to individual board electrical traces of the circuit board;
FIG. 7A is a partial enlarged cross-sectional view of Region A of FIG. 6, showing the dielectric layer positioned about the solder paste ball bumps and between the PZT stack and the interposer;
FIG. 7B is a partial enlarged cross-sectional view of Region B of FIG. 6, showing the dielectric layer between the PZT stack and the interposer;
FIGS. 8A and 8B are partial cross-sectional views of an exemplified transducer mounted to a portion of the circuit board;
FIG. 9 is an enlarged partial view of Region B of an exemplified transducer mounted to a portion of the circuit board;
FIG. 10 is a partial enlarged cross-sectional view of a transducer that does not include an interposer, showing a solder paste ball bump mounted thereon the underlying circuit board, each ball bump being mounted onto one board electrical trace of the circuit board, and showing the PZT stack being mounted thereon so that the respective element signal electrodes of the PZT stack are in electrical continuity, via the respective ball bumps, to their respective board electrical trace of the circuit board;
FIG. 11A is a partial enlarged cross-sectional view of FIG. 10, showing the ground electrode layer of the transducer without an interposer wire bonded to ground pads of the circuit board;
FIG. 11B is a partial enlarged cross-sectional view of FIG. 10, showing the ball bump disposed therebetween and in electrical communication with the electrical trace of the circuit board and the element signal electrode of the PZT stack;
FIG. 12A is a schematic showing the flex circuit board and a pair of Samtec BTH-090 connectors mounted to a rigid portion of the circuit board;
FIG. 12B is an exemplary pin-out table for the connector shown in FIGS. 5B and 12A;
FIG. 13 is a schematic showing a side view of the individual coaxial cables that are to be operatively coupled to the pair of Samtec BTH-090 connectors on the flex circuit board via a pair of BSH-090 connectors;
FIG. 14 is a schematic showing an exemplary plan view of half of the coaxial leads therein the cable connected to one of the BSH-090 connectors;
FIG. 15A is an illustration of an exemplary plan view of the distal end of a medical cable assembly connected to the folded flex circuit board; the cable's proximal end (not shown) may include a multi-pin ZIF connector that interfaces with the ultrasound system and may be used to practice one or more aspects of the present invention;
FIG. 15B illustrates an exemplary termination pin-out for the individual coax cables of a medical cable assembly to a multi-pin ZIF connector having an exemplary ZIF connector such as an ITT Cannon DLM6 connector;
FIG. 16 is a block diagram illustrating an exemplary high frequency ultrasonic imaging system;
FIG. 17 is a block diagram further illustrating the exemplary high frequency ultrasonic imaging system shown in FIG. 16;
FIG. 18a is a schematic diagram illustrating exemplary receive beamformers, transmit beamformers, front end electronics, and associated components;
FIG. 18b is an exemplary embodiment providing additional detail of the front end electronics shown in FIG. 18a;
FIG. 18c is an exemplary embodiment of a receive controller (RX controller) in an embodiment according to the present invention;
FIG. 18d is an illustration of an exemplary transmit controller (TX controller) in an embodiment according to the present invention;
FIG. 19 is a system signal processing block diagram illustrating an exemplary beamformer control board;
FIG. 20 is a schematic diagram of a TX/RX Switch and Pulser and related circuitry;
FIG. 21 is a schematic diagram of an alternative embodiment of a TX/RX Switch and Pulser and related circuitry;
FIG. 22 is a block diagram for an exemplary transmit beamformer control;
FIGS. 22A-22C illustrate how exemplary waveshape data can be used to change the fine delay, pulse width and dead time for "A" and "B" signals;
FIG. 24 illustrates a systems electronics overview of an exemplary high frequency ultrasonic imaging system;
FIG. 25 shows an exemplary single channel delay scheme for quadrature sampling;
FIG. 25B is an alternative way of implementing interpolation filters, phase rotation and dynamic apodization according to an embodiment of the invention;
FIG. 26 illustrates an exemplary control RAM for storing receive control signals;
FIG. 26A shows exemplary beamformer delay control signals for center and outer elements of an arrayed transducer;
FIG. 27 is a block diagram of an exemplary transmit/receive synchronization scheme;
FIG. 27A is a block diagram of an alternate exemplary transmit/receive synchronization scheme;
FIG. 28 illustrates an exemplary RF memory buffer for storage of beamformer output;
FIG. 29 illustrates an exemplary system software overview of an exemplary high frequency ultrasonic imaging system;
FIG. 30 is an exemplary main system software application overview for an exemplary high frequency ultrasonic imaging system;
FIG. 31 illustrates an exemplary modular system overview for an exemplary high frequency ultrasonic imaging system;
FIG. 32 displays an exemplary transmit frequency, half cycle on time, and pulse durations;
FIG. 33 illustrates exemplary bandwidth sampling of 30 MHz signal spectrum;
FIG. 34 illustrates an exemplary quadrature sampled sine wave at 0.9 times the sample frequency;
FIG. 34A is an exemplary illustration of the 16 sample points of FIG. 34 with respect to Q and I sampling points;
FIG. 34B is an exemplary illustration of a window of eight samples used by an exemplary FIR filter for interpolation of points 0-3, between Q and I samples;
FIG. 34C is the exemplary window of FIG. 34 moved forward by one sample in order to interpolate points 4-15;
FIG. 35 displays exemplary interpolated points for I and Q waveforms;
FIG. 36 displays an exemplary quadrature sampled data set for single ray line acquisition from a linear array;
FIGS. 37A and 37B display two exemplary channel signals returned from the same range point, but with a path length difference corresponding to one-half wavelength;
FIG. 38 displays 3-1 multi-line scanning with an exemplary curved array transducer;
FIG. 39 displays a conceptual implementation of an interpolation delay method;
FIG. 40 displays an exemplary 3-1 multi-line operation of an interpolation delay method; and
FIG. 41 is a schematic design of Complementary Hilbert Transform Filters.
DETAILED DESCRIPTION
The present invention may be understood more readily by reference to the following detailed description of the invention and the Examples included therein and to the Figures and their previous and following description. Before the present compounds, compositions, articles, devices, and/or methods are disclosed and described, it is to be understood that this invention is not limited to specific methods, specific components, or to particular computer architecture, as such may, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the specification and the appended claims, the singular forms "a," "an" and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a processing unit," or to "a receive channel" includes two or more such processing units or receive channels, and the like.
Ranges may be expressed herein as from "about" one particular value, and/or to "about" another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent "about," it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.
"Optional" or "optionally" means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where said event or circumstance occurs and instances where it does not.
Aspects of the exemplary systems disclosed herein can be implemented via a general- purpose computing device such as one in the form of a computer 101 shown in FIG. 1. The components of the computer 101 can include, but are not limited to, one or more processors or processing units 103, a system memory 112, and a system bus 113 that couples various system components including the processor 103 to the system memory 112.
The system bus 113 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnects (PCI) bus also known as a Mezzanine bus. The bus 113, and all buses specified in this description, can also be implemented over a wired or wireless network connection, and each of the subsystems, including the processor 103, a mass storage device 104, an operating system 105, application software 106, data 107, a network adapter 108, system memory 112, an Input/Output Interface 110, a display adapter 109, a display device 111, and a human machine interface 102, can be contained within one or more remote computing devices 114a,b,c at physically separate locations, connected through buses of this form, in effect implementing a fully distributed system.
The computer 101 typically includes a variety of computer readable media. Such media can be any available media that is accessible by the computer 101 and includes both volatile and non-volatile media, removable and non-removable media. The system memory 112 includes computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). The system memory 112 typically contains data such as data 107 and/or program modules such as operating system 105 and application software 106 that are immediately accessible to and/or are presently operated on by the processing unit 103.
The computer 101 may also include other removable/non-removable, volatile/non-volatile computer storage media. By way of example, FIG. 1 illustrates a mass storage device 104 which can provide non-volatile storage of computer code, computer readable instructions, data structures, program modules, and other data for the computer 101. For example, a mass storage device 104 can be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like. Any number of program modules can be stored on the mass storage device 104, including, by way of example, an operating system 105 and application software 106. Each of the operating system 105 and application software 106 (or some combination thereof) may include elements of the programming and the application software 106. Data 107 can also be stored on the mass storage device 104. The data 107 can be stored in any of one or more databases known in the art. Examples of such databases include DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, mySQL, PostgreSQL, and the like. The databases can be centralized or distributed across multiple systems.
A user can enter commands and information into the computer 101 via an input device (not shown). Examples of such input devices include, but are not limited to, a keyboard, pointing device (e.g., a "mouse"), a microphone, a joystick, a serial port, a scanner, and the like. These and other input devices can be connected to the processing unit 103 via a human machine interface 102 that is coupled to the system bus 113, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB). In an exemplary system of an embodiment according to the present invention, the user interface can be chosen from one or more of the input devices listed above. Optionally, the user interface can also include various control devices such as toggle switches, sliders, variable resistors and other user interface devices known in the art. The user interface can be connected to the processing unit 103. It can also be connected to other functional blocks of the exemplary system described herein in conjunction with or without connection with the processing unit 103 connections described herein.
A display device 111 can also be connected to the system bus 113 via an interface, such as a display adapter 109. For example, a display device can be a monitor or an LCD (Liquid Crystal Display). In addition to the display device 111, other output peripheral devices can include components such as speakers (not shown) and a printer (not shown) which can be connected to the computer 101 via Input/Output Interface 110.
The computer 101 can operate in a networked environment using logical connections to one or more remote computing devices 114a,b,c. By way of example, a remote computing device can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and so on. Logical connections between the computer 101 and a remote computing device 114a,b,c can be made via a local area network (LAN) and a general wide area network (WAN). Such network connections can be through a network adapter 108. A network adapter 108 can be implemented in both wired and wireless environments. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet 115. The remote computer 114a,b,c may be a server, a router, a peer device or other common network node, and typically includes all or many of the elements already described for the computer 101. In a networked environment, program modules and data may be stored on the remote computer 114a,b,c. The logical connections include a LAN and a WAN. Other connection methods may be used, and networks may include such things as the "world wide web" or Internet.
For purposes of illustration, application programs and other executable program components such as the operating system 105 are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 101, and are executed by the data processor(s) of the computer. An implementation of application software 106 may be stored on or transmitted across some form of computer readable media. Computer readable media can be any available media that can be accessed by a computer. By way of example, and not limitation, computer readable media may comprise "computer storage media" and "communications media." Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. An implementation of the disclosed method may be stored on or transmitted across some form of computer readable media.
The processing of the disclosed method can be performed by software components. The disclosed method may be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices. Generally, program modules include computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The disclosed method may also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices. Aspects of the exemplary systems shown in the Figures and described herein, can be implemented in various forms including hardware, software, and a combination thereof. The hardware implementation can include any or a combination of the following technologies, which are all well known in the art: discrete electronic components, a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit having appropriate logic gates, a programmable gate array(s) (PGA), field programmable gate array(s) (FPGA), etc. The software comprises an ordered listing of executable instructions for implementing logical functions, and can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
Aspects of the exemplary systems can be implemented in computerized systems. Aspects of the exemplary systems, including for instance the computing unit 101, can be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the system and method include, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional examples include set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Aspects of the exemplary systems can be described in the general context of computer instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The system and method may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Among many possible applications, the described embodiments enable in vivo visualization, assessment, and measurement of anatomical structures and hemodynamic function in longitudinal imaging studies of small animals. The systems can provide images having very high resolution, image uniformity, depth of field, adjustable transmit focal depths, multiple transmit focal zones for multiple uses. For example, the ultrasound image can be of a subject or an anatomical portion thereof, such as a heart or a heart valve. The image can also be of blood and can be used for applications including evaluation of the vascularization of tumors. The systems can be used to guide needle injections.
The described embodiments can also be used for human clinical, medical, manufacturing (e.g., ultrasonic inspections, etc.) or other applications where producing an image at a transmit frequency of 15 MHz or higher is desired.
Embodiments according to the described systems can comprise one or more of the following, which are described in greater detail herein: an array transducer that can be operatively connected to a processing system that may comprise one or more signal and image processing capabilities; digital transmit and receive beamformer subsystems; analog front end electronics; a digital beamformer controller subsystem; a high voltage subsystem; a computer module; a power supply module; a user interface; software to run the beamformer; a scan converter; and other system features as described herein.
An arrayed transducer used in the system can be incorporated into a scanhead that, in one embodiment, may be attached to a fixture during imaging which allows the operator to acquire images free of the vibrations and shaking that usually result from "free hand" imaging. A small animal subject may also be positioned on a heated platform with access to anesthetic equipment, and a means to position the scanhead relative to the subject in a flexible manner. The scanhead can be attached to a fixture during imaging. The fixture can have various features, such as freedom of motion in three dimensions, rotational freedom, a quick release mechanism, etc. The fixture can be part of a "rail system" apparatus, and can integrate with the heated mouse platform.
The systems can be used with platforms and apparatus used in imaging small animals including "rail guide" type platforms with maneuverable probe holder apparatuses. For example, the described systems can be used with multi-rail imaging systems, and with small animal mount assemblies as described in U.S. Patent Application No. 10/683,168, entitled "Integrated Multi- Rail Imaging System," U.S. Patent Application No. 10/053,748, entitled "Integrated Multi-Rail Imaging System," U.S. Patent Application No. 10/683,870, now U.S. Patent No. 6,851,392, issued February 8, 2005, entitled "Small Animal Mount Assembly," and U.S. Patent Application No. 11/053,653, entitled "Small Animal Mount Assembly," which are each fully incorporated herein by reference.
Small animals can be anesthetized during imaging and vital physiological parameters such as heart rate and temperature can be monitored. Thus, an embodiment of the system may include means for acquiring ECG and temperature signals for processing and display. An embodiment of the system may also display physiological waveforms such as an ECG, respiration or blood pressure waveform.
OVERVIEW
Provided herein are embodiments of a system for acquiring ultrasound signals comprising a signal processing unit adapted for acquiring a received ultrasound signal from an ultrasound transducer having a plurality of elements. The system can be adapted to receive ultrasound signals having a frequency of at least 15 megahertz (MHz) with a transducer having a field of view of at least 5.0 millimeters (mm) at a frame rate of at least 20 frames per second (fps). In other embodiments, the ultrasound signals can be acquired at an acquisition rate of 50, 100, or 200 fps. Optionally, ultrasound signals can be acquired at an acquisition rate of 200 frames per second (fps) or higher. In other examples, the received ultrasound signals can be acquired at a frame rate within the range of about 100 fps to about 200 fps. In some exemplary aspects, the length of the transducer is equal to the field of view. The field of view can be wide enough to include organs of interest such as the small animal heart and surrounding tissue for cardiology, and full-length embryos for abdominal imaging. In one embodiment, the two-way bandwidth of the transducer can be approximately 50% to 100%. Optionally, the two-way bandwidth of the transducer can be approximately 60% to 70%. Two-way bandwidth refers to the bandwidth of the transducer that results when the transducer is used both as a transmitter of ultrasound and a receiver; that is, the two-way bandwidth is the bandwidth of the one-way spectrum squared.
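The relationship between one-way and two-way bandwidth can be illustrated with a brief sketch. This is offered only as an illustration, not as part of the disclosure: the Gaussian spectral shape, the 30 MHz center frequency, and the spectral spread below are hypothetical values chosen for the example.

```python
import numpy as np

# Hypothetical one-way amplitude spectrum (Gaussian) for a 30 MHz transducer.
f = np.linspace(5e6, 60e6, 20001)        # frequency axis in Hz
f0 = 30e6                                # assumed center frequency
sigma = 9e6                              # assumed spectral spread (illustrative only)
one_way = np.exp(-((f - f0) ** 2) / (2 * sigma ** 2))
two_way = one_way ** 2                   # two-way spectrum: the one-way spectrum squared

def fractional_bandwidth(spectrum):
    """-6 dB (half-maximum) fractional bandwidth relative to the center frequency."""
    passband = f[spectrum >= 0.5 * spectrum.max()]
    return (passband.max() - passband.min()) / f0

print(f"one-way bandwidth: {fractional_bandwidth(one_way):.0%}")   # ~71% for these values
print(f"two-way bandwidth: {fractional_bandwidth(two_way):.0%}")   # ~50% for these values
```

Because squaring the spectrum narrows it, the two-way figures quoted above are smaller than the corresponding one-way bandwidths.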
The processing unit produces an ultrasound image from the acquired ultrasound signal(s). The acquired signals may be processed to generate an ultrasound image at a display rate that is slower than the acquisition rate. Optionally, the generated ultrasound image can have a display rate of 100 fps or less. For example, the generated ultrasound image has a display rate of 30 fps or less. The field of view can range from about 2.0 mm to about 30.0 mm. When a smaller field of view is utilized, the processing unit can acquire the received ultrasound signals at an acquisition rate of at least 300 frames per second (fps). In other examples, the acquisition rate can be 50, 100, 200 or more frames per second (fps).
In one embodiment, in which a 30 MHz center frequency transducer is used, the image generated using the disclosed systems may have a lateral resolution of about 150 microns (μm) or less and an axial resolution of about 75 microns (μm) or less. For example, the image can have an axial resolution of about 30 microns (μm). Furthermore, embodiments according to the present invention transmit ultrasound that may be focused at a depth of about 1.0 mm to about 30.0 mm. For example, the transmitted ultrasound can be focused at a depth of about 3.0 mm to about 10.0 mm. In other examples, the transmitted ultrasound can be focused at a depth of about 2.0 mm to about 12.0 mm, of about 1.0 mm to about 6.0 mm, of about 3.0 mm to about 8.0 mm, or of about 5.0 mm to about 30.0 mm.
TRANSDUCERS
In various embodiments, the transducer can be, but is not limited to, a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, or a curved array transducer. A linear array is typically flat, i.e., all of the elements lie in the same (flat) plane. A curved linear array is typically configured such that the elements lie in a curved plane. The transducers described herein are "fixed" transducers. The term "fixed" means that the transducer array does not utilize movement in its azimuthal direction during transmission or receipt of ultrasound in order to achieve its desired operating parameters, or to acquire a frame of ultrasound data. Moreover, if the transducer is located in a scanhead or other imaging probe, the term "fixed" may also mean that the transducer is not moved in an azimuthal or longitudinal direction relative to the scan head, probe, or portions thereof during operation. The described transducers, which are fixed as described, are referred to throughout as an "array," a "transducer," an "ultrasound transducer," an "ultrasound array," an "array transducer," an "arrayed transducer," an "ultrasonic transducer" or combinations of these terms, or by other terms which would be recognized by those skilled in the art as referring to an ultrasound transducer. The transducers as described herein can be moved between the acquisition of ultrasound frames; for example, the transducer can be moved between scan planes after acquiring a frame of ultrasound data, but such movement is not required for their operation. As one skilled in the art would appreciate, however, the transducer of the present system can be moved relative to the object imaged while still remaining fixed as to the operating parameters. For example, the transducer can be moved relative to the subject during operation to change position of the scan plane or to obtain different views of the subject or its underlying anatomy. Arrayed transducers are comprised of a number of elements. In one embodiment, the transducer used to practice one or more aspects of the present invention comprises at least 64 elements. In one aspect, the transducer comprises 256 elements. The transducer can also comprise fewer or more than 256 elements. The transducer elements can be separated by a distance equal to about one-half the wavelength to about two times the wavelength of the center transmit frequency of the transducer (referred to herein as the "element pitch"). In one aspect, the transducer elements are separated by a distance equal to about the wavelength of the center transmit frequency of the transducer. Optionally, the center transmit frequency of the transducer used is equal to or greater than 15 MHz. For example, the center transmit frequency can be approximately 15 MHz, 20 MHz, 30 MHz, 40 MHz, 50 MHz, 55 MHz or higher. In some exemplary aspects, the ultrasound transducer can transmit ultrasound into the subject at a center frequency within the range of about 15 MHz to about 80 MHz. In one embodiment according to the present invention, the transducer has a center operating frequency of at least 15 MHz and the transducer has an element pitch equal to or less than 2.0 times the wavelength of sound at the transducer's transmitted center frequency. The transducer can also have an element pitch equal to or less than 1.5 times the wavelength of sound at the transducer's transmitted center frequency.
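To make the pitch-to-wavelength relationship concrete, the short sketch below (an illustration, not part of the disclosure) computes the wavelength and the half-wavelength to two-wavelength pitch range for several center frequencies, assuming a nominal sound speed of 1540 m/s in tissue. At 30 MHz the wavelength is roughly 51 microns, so, for example, the 75 micron pitch of the exemplary 256-element array described later falls between one and two wavelengths.

```python
# Illustrative sketch: center frequency -> wavelength -> allowed element-pitch range.
SPEED_OF_SOUND = 1540.0  # m/s, nominal value assumed for soft tissue

def pitch_range_for_center_frequency(f_center_hz):
    """Return (half-wavelength, two-wavelength) pitch bounds in microns."""
    wavelength_um = SPEED_OF_SOUND / f_center_hz * 1e6
    return 0.5 * wavelength_um, 2.0 * wavelength_um

for f_mhz in (15, 30, 50):
    low, high = pitch_range_for_center_frequency(f_mhz * 1e6)
    wavelength_um = SPEED_OF_SOUND / (f_mhz * 1e6) * 1e6
    print(f"{f_mhz} MHz: wavelength ~{wavelength_um:.1f} um, "
          f"pitch range ~{low:.1f}-{high:.1f} um")
```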
By non-limiting example, one transducer that may be used with the described system can be, among others, an arrayed transducer as described in U.S. Patent Application No. 11/109,986, entitled "Arrayed Ultrasonic Transducer," filed April 20, 2005 and published on December 8, 2005 as U.S. Patent Application Publication No. US 2005/0272183 A1, which is fully incorporated herein by reference and made a part hereof. The transducer may also comprise an array of piezoelectric elements which can be electronically steered using variable pulsing and delay mechanisms. The processing system according to various embodiments of the present invention may include multiple transducer ports for the interface of one or more transducers or scanheads. As previously described, a scanhead can be hand held or mounted to a rail system and the scanhead cable can be flexible.
Whether the system includes a transducer, or is adapted to be used with a separately acquired transducer, each element of the transducer can be operatively connected to a receive channel of a processing unit. Optionally, the number of transducer elements is greater than the number of receive channels. For example, the transducer may comprise at least 64 elements that are operatively connected to at least 32 receive channels. In one aspect, 256 elements are operatively connected to 64 receive channels. In another aspect, 256 elements are operatively connected to 128 receive channels. In yet another aspect, 256 elements are operatively connected to 256 receive channels. Each element can also be operatively connected to a transmit channel.
SAMPLING
The system can further comprise one or more signal samplers for each receive channel. The signal samplers can be analog-to-digital converters (ADCs). The signal samplers can use direct sampling techniques to sample the received signals. Optionally, the signal samplers can use bandwidth sampling to sample the received signals. In another aspect, the signal samplers can use quadrature sampling to sample the received signals. Optionally, with quadrature sampling, the signal samplers comprise sampling clocks shifted 90 degrees out of phase. With quadrature sampling, the sampling clocks also have a receive clock period, and the receive clock frequency can be approximately equal to the center frequency of a received ultrasound signal but may be different from the transmit frequency. For example, in many situations, the center frequency of the received signal has been shifted lower than the center frequency of the transmit signal due to frequency dependent attenuation in the tissue being imaged. For these situations the receive sample clock frequency can be lower than the transmit frequency.
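A minimal sketch of this quadrature sampling scheme is given below, purely for illustration; the 30 MHz receive clock, the 0.9 ratio between the echo frequency and the clock (echoing FIG. 34), and the use of a cosine stand-in for the received RF signal are all assumptions made for the example, with the Q clock modeled as the I clock shifted by a quarter of the receive clock period.

```python
import numpy as np

fs = 30e6                      # assumed receive sample clock, ~ the received center frequency
f_echo = 0.9 * fs              # assumed echo frequency slightly below the clock (cf. FIG. 34)
n = 16                         # number of I/Q sample pairs, as in FIG. 34A

t_i = np.arange(n) / fs        # in-phase (I) sample instants
t_q = t_i + 1.0 / (4.0 * fs)   # quadrature (Q) clock: shifted 90 degrees (a quarter clock period)

echo = lambda t: np.cos(2.0 * np.pi * f_echo * t)   # stand-in for the received RF signal

i_samples = echo(t_i)
q_samples = echo(t_q)
print(np.round(i_samples, 3))
print(np.round(q_samples, 3))
```

Because the Q samples lag the I samples by a quarter of the clock period, each I/Q pair can be treated as approximating the real and imaginary parts of the received signal at that range sample, which is the form of data that later interpolation and phase-rotation stages operate on.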
An acquired signal can be processed using an interpolation filtration method. Using the interpolation filtration method, a delay resolution that is less than the receive clock period can be achieved. In an exemplary aspect, the delay resolution can be, for example, 1/16 of the receive clock period.
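One way such fine delays can be realized is by applying a short interpolation filter to the sampled data. The sketch below is a generic windowed-sinc fractional-delay filter, offered only as an illustration of the idea: the eight-tap window length echoes FIG. 34B, but the filter design itself is an assumption, not the filter used in the described system.

```python
import numpy as np

SUBSAMPLE_STEPS = 16    # delay resolution of 1/16 of the receive clock period
TAPS = 8                # eight-sample interpolation window (cf. FIG. 34B); assumed design

def fractional_delay_filter(frac, taps=TAPS):
    """Windowed-sinc FIR whose output is the input delayed by `frac` samples (0 <= frac < 1)."""
    n = np.arange(taps) - (taps - 1) / 2.0
    h = np.sinc(n - frac) * np.hamming(taps)
    return h / h.sum()                       # normalize for unity gain at DC

def apply_fine_delay(samples, sixteenths):
    """Delay a sampled channel signal by sixteenths/16 of one receive clock period."""
    h = fractional_delay_filter(sixteenths / SUBSAMPLE_STEPS)
    return np.convolve(samples, h, mode="same")

# Example: delay a test waveform by 5/16 of a sample period.
delayed = apply_fine_delay(np.sin(2 * np.pi * 0.1 * np.arange(64)), sixteenths=5)
```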
The processing unit can comprise a receive beamformer. The receive beamformer can be implemented using at least one field programmable gate array (FPGA) device. The processing unit can also comprise a transmit beamformer. The transmit beamformer can also be implemented using at least one FPGA device. In one aspect, 512 lines of ultrasound are generated, transmitted into the subject and received from the subject for each frame of the generated ultrasound image. In a further aspect, 256 lines of ultrasound can also be generated, transmitted into the subject and received from the subject for each frame of the generated ultrasound image. In another aspect, at least two lines of ultrasound can be generated, transmitted into the subject and received from the subject at each element of the array for each frame of the generated ultrasound image. Optionally, one line of ultrasound is generated, transmitted into the subject and received from the subject at each element of the array for each frame of the generated ultrasound image.
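As a rough illustration of how the line count and the pulse repetition frequency bound the frame acquisition rate (a standard relationship assumed here rather than quoted from the disclosure), each ray line requires at least one transmit/receive event:

```python
def max_frame_rate(prf_hz, lines_per_frame, simultaneous_lines=1):
    """Upper bound on acquisition frame rate: one transmit event per group of received lines."""
    return prf_hz * simultaneous_lines / lines_per_frame

print(max_frame_rate(10_000, 256))      # ~39 fps for 256 lines per frame at an assumed 10 kHz PRF
print(max_frame_rate(10_000, 256, 3))   # ~117 fps if three receive lines are formed per transmit
```

Multi-line receive beamforming, such as the 3-1 scheme illustrated in FIGS. 38 and 40, raises this bound by forming several receive lines from a single transmit event.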
The ultrasound systems described herein can be used in multiple imaging modes. For example, the systems can be used to produce an image in B-mode, M-mode, Pulsed Wave (PW) Doppler mode, power Doppler mode, color flow Doppler mode, RF-mode and 3-D mode. The systems can be used in Color Flow Imaging modes, including directional velocity color flow, Power Doppler imaging and Tissue Doppler imaging. The systems can also be used with Steered PW Doppler, with very high pulse repetition frequencies (PRF). The systems can also be used in M-Mode, with simultaneous B-Mode, for cardiology or other applications where such techniques are desired. The system can optionally be used in Duplex and Triplex modes, in which M-Mode and PW Doppler and/or Color Flow modes run simultaneously with B-Mode in real-time. A 3-D mode in which B-Mode or Color Flow mode information is acquired over a 3-dimensional region and presented in a 3-D surface rendered display can also be used. A line based image reconstruction, or "EKV," mode can be used for cardiology or other applications, in which image information is acquired over several cardiac cycles and recombined to provide a very high frame rate display. Line based image reconstruction methods are described in U.S. Patent Application No. 10/736,232, now U.S. Patent No. 7,052,460, issued May 30, 2006 and entitled "System for Producing an Ultrasound Image Using Line Based Image Reconstruction," which is incorporated fully herein by reference and made a part hereof. Such line based imaging methods can be incorporated to produce an image when a high frame acquisition rate is desirable, for example when imaging a rapidly beating mouse heart. In the RF acquisition mode, raw RF data can be acquired, displayed and made available for off-line analysis. In one embodiment, the transducer can transmit at a pulse repetition frequency (PRF) of at least 500 hertz (Hz). The system can further comprise a processing unit for generating a color flow Doppler ultrasound image from the received ultrasound. Optionally, the PRF is between about 100 Hz and about 150 KHz. In M-Mode or RF Mode the PRF is between about 100 Hz and about 10 KHz. For Doppler modes, the PRF can be between about 500 Hz and about 150 KHz. For M-Mode and RF mode, the PRF can be between about 50 Hz and about 10 KHz.
Exemplary Arrayed Transducer
Referring now to FIGS. 2A-15B, a circuit board according to an embodiment of the present invention is adapted to accept an exemplary transducer and is further adapted to connect to at least one conventional connector. As noted herein, the conventional connector can be adapted to complementarily connect with a cable for transmission and/or supply of required signals. With regard to the figures, due to the fine detail of the circuit board and unless otherwise indicated, the figures are merely representative of complementary circuit boards and associated multi-element arrays. FIGS. 5A-5C show various views of an exemplary circuit board for a 256 element array having a 75 micron pitch.
Referring now in particular to FIGS. 2A-4B, an exemplary transducer for use with the exemplary circuit board is illustrated. In FIGS. 2A-4B, exemplary top, bottom and cross-sectional views of an exemplary schematic PZT stack are shown. FIG. 2A shows a top view of the PZT stack and illustrates portions of the ground electric layer that extend from the top and bottom portions of the PZT stack. In one aspect, the ground electric layer extends the full width of the PZT stack. FIG. 2B shows a bottom view of the PZT stack. In this aspect, along the longitudinally extending edges of the PZT stack, the PZT stack forms exposed portions of the dielectric layer between individual signal electrode elements. In another aspect, the signal elements extend the full width of the PZT stack. As one will appreciate, not shown in the underlying "center portion" of the PZT stack are lines showing the individualized signal electrode elements. As one will further appreciate, there is one signal electrode per element of the PZT stack, e.g., 256 signal electrodes for a 256-element array.
FIG. 3A is a top plan view of an interposer for use with the PZT stack of FIGS. 2A-C, comprising electrical traces extending outwardly from adjacent the central opening of the interposer. The interposer further comprises ground electrical traces located at the top and bottom portions of the piece.
The interposer can further comprise a dielectric layer disposed thereon a portion of the top surface of the interposer about the central opening of the piece. In this aspect, and referring also to FIG. 3B, the dielectric layer defines two arrays of staggered wells, one array being on each side of the central opening and extending along an axis parallel to the longitudinal axis of the interposer. Each well is in communication with an electrical trace of the interposer. A solder paste can be used to fill each of the wells in the dielectric layer such that, when a PZT stack is mounted thereon the dielectric layer and heat is applied, the solder melts to form the desired electrical continuity between the individual element signal electrodes and the individual traces on the interposer. In use, the well helps to retain the solder within the confines of the well.
FIG. 4A is a top plan view of the PZT stack shown in FIG. 2A mounted thereon the dielectric layer of the interposer shown in FIG. 3A. To aid in the understanding of the invention, FIG. 4B provides a top plan view of the PZT stack shown in FIG. 2A mounted thereon the dielectric layer and interposer shown in FIG. 3A, in which the PZT stack is shown as a transparency. This provides an illustration of the mounting relationship between the PZT stack and the underlying dielectric layer/interposer, the solder paste mounted therebetween forming an electrical connection between the respective element signal electrodes and the electrical traces on the interposer.
Referring now to FIG. 5A, a schematic top plan view of an exemplary circuit board for mounting the transducer of the present invention thereto is illustrated. In one aspect, at least a portion of the circuit board can be flexible. In one embodiment, the circuit board comprises a bottom copper ground layer and a Kapton™ layer mounted to the upper surface of the bottom copper ground layer. In one aspect, the circuit board can also comprise a plurality of underlying substantially rigid support structures. In this aspect, a central portion surrounding a central opening in the circuit board can have a rigid support structure mounted to the bottom surface of the bottom copper ground layer. In a further aspect, portions of the circuit board to which the connectors can be attached also have rigid support structures mounted to the bottom surface of the bottom copper ground layer.
The circuit board further comprises a plurality of board electrical traces formed thereon the top surface of the Kapton™ layer, each board electrical trace having a proximal end adapted to couple to an electrical trace of the transducer and a distal end adapted to couple to a connector, such as, for example, a cable for communication of signals therethrough. In one aspect, the length of the circuit forming each electrical trace has a substantially constant impedance.
The circuit board also comprises a plurality of vias that pass through the Kapton™ layer and are in communication with the underlying ground layer so that signal return paths, or signal ground paths, can be formed. Further, the circuit board comprises a plurality of ground pins. Each ground pin has a proximal end that is coupled to the ground layer of the circuit board (passing through one of the vias in the Kapton™ layer) and a distal end that is adapted to couple to the connector.
FIG. 5B is a top plan view of an exemplary circuit board for mounting of an exemplary 256-element array having a 75 micron pitch, and FIG. 5C is a top plan view of the vias of the circuit board of FIG. 5B that are in communication with an underlying ground layer of the circuit board. FIG. 5B also shows bores defined in the circuit board that are sized and shaped to accept pins of the connectors such that, when the connector is mounted thereon portions of the circuit board, there will be correct registration of the respective electrical traces and ground pins with the connector.
FIG. 6 illustrates a partial enlarged top plan view of a portion of the exemplified circuit board showing, in Region A, the ground electrode layer of the transducer being wire bonded to an electrical trace on the interposer, which can be, in turn, wire bonded to ground pads of the circuit board. The ground pads of the circuit board are in communication, through vias in the Kapton™ layer, with the underlying bottom copper ground layer. As illustrated, in Region B, the individual electrical traces of the transducer are wire bonded to individual board electrical traces of the circuit board. Referring now to FIG. 8A, in one aspect the central opening of the circuit board underlies the backing material of the transducer. FIG. 9 is an enlarged partial view of Region B of an exemplified transducer mounted to a portion of the circuit board. Referring now to FIGS. 11A-11B, a transducer mounting to the substantially rigid central portion of the circuit board is shown that does not include an interposer. This embodiment allows for the elimination of most of the wire bonds. In this aspect, the PZT stack is surface mounted onto the circuit board directly by, for example, means of a series of gold ball bumps. Gold ball bumping is a conventional surface mounting technique and represents another type of surface mounting technique consistent with the previously mentioned surface mounting techniques. In this example, the rigidized central portion of the circuit board can provide the same functionality as the interposer. Wire bonds, or other electrical connections, from the ground electrode of the PZT stack to the ground of the circuit board are still required to complete the signal return of the assembled device. FIG. 11A shows the ground electrode layer of the transducer (without interposer) wire bonded to the ground pads of the circuit board.
In one aspect, the gold ball bumps are applied directly onto the circuit board. Each ball bump is positioned in communication with one electrical trace of the circuit board. When the PZT stack is applied, it is aligned with the electrical traces of the circuit board and electrical continuity is made via the ball bumps. The PZT stack is secured to the circuit board by, for example and not meant to be limiting, a) use of an underfill, such as a UV-curable adhesive; b) use of an ACF tape; c) electroplating pure Indium solder onto the electrodes of either the PZT or the circuit board and reflowing the Indium to provide a solder joint between the signal electrode on the PZT and the gold ball bump on the circuit board, and the like.
An arrayed transducer can be operatively connected to the processing unit of the system using the flex circuit as shown in FIGS. 2A-11. Referring now to FIGS. 12-15, the flex circuit can be operatively connected with a BTH connector. BTH connectors are common and are available in a variety of sizes. The BTH connector comprises a number of pins for mating with a BSH connector. The number of pins can be at least one greater than the number of array elements or traces of flex. For example, the number of pins can equal twice the number of array elements or corresponding traces of flex. Thus, in one example, 2x180 = 360 pins can be used for the 256 traces on the flex circuit of a 256 element array. In another example, 256 pins can be used for the exemplary 256 element array. The BSH connector can be connectively seated within the BTH. The BSH connector is operatively connected with an interface such as a printed circuit board that is terminated with a plurality of coaxial cables. A larger common cable formed from the plurality of coaxial cables can be terminated with a ZIF end for interfacing with the processing unit of the ultrasound system at a ZIF receptacle or interfacing site. One exemplary ZIF connector that can be used is a 360 Pin DLM6 ITT Cannon ZIF™ connector as available from ITT Corporation of White Plains, NY. As would be clear to one skilled in the art, however, alternative ZIF™ connectors can be used for interfacing with the processing unit and can have more or fewer than 360 pins.
The connection can comprise a cable or bundle of cables. The cable can connect each element of the array to the processing unit in a one-to-one relationship; that is, each element can be electrically connected with its own signal and a ground lead to a designated connection point in the processing unit whereby the plurality of individual element connections are bundled together to form the overall cable. Optionally, each individual electrical connection can be unbundled and not physically formed into a cable or cable assembly. Suitable cables can be coaxial cables, twisted pairs, and copper alloy wiring. Other connection means can be via non-physically connected methods such as RF links, infrared links, and similar technologies where appropriate transmitting and receiving components are included.
The individual element connections can comprise coaxial cable of a type typically used for connecting array elements to processing units. These coaxial cables can be of a low loss type.
The coaxial cables typically comprise a center conductor and some type of outer shielding insulated from the center conductor and encased in an outer layer of insulation. These coaxial cables can have nominal impedances appropriate for use with an array. Example nominal impedances can be 50 ohms or more, including 50 ohms, 52 ohms, 73 ohms, 75 ohms or 80 ohms.
An exemplary medical cable for use with one or more of the ultrasound imaging systems described herein comprises a minimum of 256 coaxial cables of 40 AWG with a nominal impedance of about 75 ohms with coaxial cable lengths of about 2.0m. The length can be less than 2.0m or greater than 2.0m. The medical cable jacket length can accommodate the cable length, can include additional metal sheaths for electrical shielding and can be made of PVC or other flexible materials.
Cables and the connections for connecting an array transducer to the processing unit, including those described herein, can be fabricated by companies such as Precision Interconnect - Tyco Electronics (Tyco Electronics Corporation, Wilmington, Delaware). The exemplary cable, at the proximal end, can further comprise flex/strain relief, 12 PCBs interfacing between the coaxial cables and the ZIF™ pins, a 360 Pin ITT Cannon ZIF™ connector and actuation handle (DLM6-360 type), and a shielded casing around the connector. The exemplary cable, at the distal end, can comprise a flex/strain relief cable terminated to two PCBs, interfacing between the coaxial cables and the flex circuit board, wherein each PCB has one BSH-090-01-L-D-A Samtec Connector (Samtec, Inc., New Albany, IN) and each PCB has 75 Ohm characteristic impedance traces with cables terminated from both sides of the PCB in a staggered layout.
The cable can use a "flex circuit" method of securing and connecting a plurality of coax cables which comprise the large cable, hi an exemplary embodiment, the array has 256-elements. The array is mounted in the central region of a flex circuit. The flex circuit has two ends such that the odd numbered elements 1,3,5,7...255 are terminated on the left end of the flex with a BTH-090 connector labeled Jl , and that the even numbered elements 2,4,6,8...256 are terminated on the right end of the flex with a BTH-090 connector labeled J3. For both ends, the elements are terminated in sequence along the upper and bottom rows of their respective connectors with GND (signal return) pins evenly dispersed across the connector in a repeated pattern.
The repeat pattern is defined from the outer edge of the flex towards the central region of the flex and is as follows:
2 signal pins, GND
3 signal pins, GND
2 signal pins, GND
3 signal pins, GND
...
3 signal pins, GND
2 signal pins, GND
2 signal pins, GND.
A schematic showing a side view of the folded flex circuit, with the array mounted in the central region of the flex, is shown in FIG. 12A, and an associated pin-out table for the connectors on the flex circuit is shown in FIG. 12B.
The flex circuit can be connected to the exemplary cable described above. The flex circuit can be connected to a Precision Interconnect -Tyco Electronics medical cable assembly.
The electrical connection from the flex to the ZIF™ connector can be made, for example, through two scanhead PCBs followed by a coax cable bundle and 12 short PCBs, each with a 2x15 connector inserted into ZIF™ pins.
Each scanhead PCB (total of two) can comprise one BSH-090 connector and 128 traces (all traces with a controlled impedance of, for example, 75 Ohms at 30 MHz) and can be terminated with 128 (40 AWG 75 Ohm) coax cables. The PCB can have outer dimensions of 0.525" by 2.344".
FIG. 13 illustrates the design of the two scanhead PCBs. FIG. 14 illustrates how the PCBs can be connected to the flex circuit and illustrates the staggered nature of how the coax cable ribbons can be soldered to the PCB. There are two scanhead PCBs. The left board can be connected to the J1 connector on the flex and the right board can be connected to the J3 connector. Each scanhead PCB can have one BSH-090 connector. The pin-out for each scanhead trace can be matched to the pin-out for the J1 and J3 connector.
ZIF Connector
An exemplary medical cable, as partially shown in FIG. 15A, comprises a ZIF connector on the proximal end, the end of the cable which connects to the processing unit. One skilled in the art will appreciate that several designs of cable assemblies are possible. FIG. 15B illustrates a pin-out that can be used for the exemplary ZIF connector. The pins labeled as G are signal return pins. The pins labeled as N/C are not terminated with coaxial cables, and these pins are reserved either for shielding to chassis ground or for other unspecified functions. The N/C pins can be accessible by simply removing the ZIF housing and soldering to the unused traces on any of the 12 PCBs connected to the ZIF.
The 12 individual PCBs used to connect to the ZIF connector have coax cables connected on one or both sides of the board. One edge of the PCB can have a connector suitable for insertion into the ZIF connector (Samtec SSW or equivalent) and each PCB shall have the appropriate traces and vias required to connect the correct coaxial cable to the correct ZIF pin. Each PCB can have a Samtec SSW, or equivalent, connector with two rows of 15 pins, although the number of coax cables may differ on some of the 12 PCBs as defined in FIG. 15B. The general layout of the pins on the 2x15 connector is universal and is shown in Table 1.
One of the 12 PCBs requires provisions in the trace layout to include an EEPROM as defined in FIG. 15B. Two of the 12 PCBs require some of the pins to be terminated as required to provide the hard-coded PROBE ID number that will identify the particular array design included inside the array assembly.
Various connection methods can be used including connectors of various styles. For these various connection methods, the impedance can be 75 Ohms at a center frequency of 30 MHz.
Table 1
The layout of connections on the connector end of the ZIF PCB that plugs into the ITT connector (general pattern).
[Table 1 is presented as an image in the original publication.]
ULTRASOUND SYSTEM
An exemplary embodiment of an ultrasound system 1600 according to the present invention is shown in FIG. 16. FIG. 16 is a block diagram illustrating an exemplary high frequency ultrasonic imaging system 1600. The blocks shown in the various Figures can be functional representations of processes that take place within an embodiment of the system 1600. In practice, however, the functions may be carried out across several locations or modules within the system 1600.
The exemplary system 1600 comprises an array transducer 1601, a cable 1619, and a processing unit 1620. The cable 1619 connects the processing unit 1620 and the array transducer 1601. The processing unit may comprise software and hardware components. The processing unit can comprise one or more of a multiplexer (MUX)/front end electronics 1602, a receive beamformer 1603, a beamformer control 1604, a transmit beamformer 1605, a system control 1606, a user interface 1607, a scan converter 1608, a video processing display unit 1609, and processing modules including one or more of an M-mode processing module (not shown), a PW Doppler processing module 1611, a B-mode processing module 1612, a color flow processing module 1613, a 3-D mode processing module (not shown), and an RF mode processing module 1615. The center frequency range of the exemplary system can be about 15-55 MHz or higher. When measured from the outside edge of the bandwidths, the frequency range of the exemplary system can be about 10-80 MHz or higher.
The array transducer 1601 interfaces with the processing unit 1620 at the MUX/front end electronics (MUX/FEE) 1602. The MUX portion of the MUX/FEE 1602 is a multiplexer which can electronically switch or connect a plurality of electrical paths to a lesser number of electrical paths. The array transducer 1601 converts electrical energy to ultrasound energy and vice versa and is electrically connected to the MUX/FEE 1602. The MUX/FEE 1602 comprises electronics which generate a transmit waveform which is connected to a certain subset of the elements of the array. This subset of elements is called the active aperture of the array transducer 1601. The electronics of the MUX/FEE 1602 also connect the active aperture of the array to the receive channel electronics. During operation, the active aperture moves about the array transducer 1601, in a manner determined by components described herein.
The MUX/FEE 1602 switchably connects the elements of the active aperture to transmit and receive channels of the exemplary system. In an exemplary 256-element array transducer embodiment of the invention, there are 64 transmit channels and 64 receive channels that can be switchably connected to the active aperture of up to 64 elements. The elements of the active aperture, up to 64 in number, are contiguous. In certain embodiments of the invention, there is a separate transmit MUX and a separate receive MUX. Other embodiments of the invention share the MUX for both the transmit channels and the receive channels.
During a transmit cycle of the exemplary ultrasound system 1600, the front end electronics portion of the MUX/FEE 1602 supplies a high voltage signal to the elements of the active aperture of the array transducer 1601. In one aspect, the front end electronics can also provide protection circuitry for the receiver channels to protect them from the high voltage transmit signal, as the receive channels and the transmit channels have a common connection point at the elements of the array transducer 1601. The protection can be in the form of isolation circuitry which limits the amount of transmit signal that can leak or pass into the receive channel to a safe level which will not cause damage to the receive electronics. Characteristics of the MUX/FEE 1602 include a fast rise time on the transmit side, and high bandwidth on the transmit and receive channels.
The MUX/FEE 1602 passes signals from the transmit beamformer 1605 to the array transducer 1601. In an exemplary embodiment, the transmit beamformer 1605 generates and supplies separate waveforms to each of the elements of the active aperture. In an exemplary embodiment, the waveform for each element of the active aperture is the same. In another aspect, the waveforms for each element of the active aperture are not all the same and in some embodiments have differing center frequencies. In one exemplary embodiment, each separate transmit waveform has a delay associated with it. The distribution of the delays for each element's waveform is called a delay profile. The delay profile is calculated in a way to cause the desired focusing of the transmit acoustic beam to the desired focal point. In certain embodiments, the transmit acoustic beam axis is perpendicular to the plane of the array 1601, and the beam axis intersects the array 1601 at the center of the active aperture of the array transducer 1601. The delay profile can also steer the beam so that it is not perpendicular to the plane of the array 1601. In an exemplary aspect of the present invention, a delay resolution of 1/16 of the period of the transmit center frequency can be used. For example, at a 50 MHz center frequency, the period is 20 nanoseconds, so 1/16 of that period is 1.25 nanoseconds, which is the exemplary delay resolution used to focus the acoustic beam. It is to be appreciated that the delay resolution may be different from 1/16th of a period; for example, delay resolutions less than 1/16th (e.g., 1/24, 1/32, etc.) as well as delay resolutions greater than 1/16th (e.g., 1/12, 1/8, etc.) are contemplated within the scope of this invention.
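By way of illustration only, the geometric focusing calculation behind such a delay profile, quantized to 1/16 of the transmit period, might be sketched as follows (Python); the element pitch, focal depth, and sound speed below are assumed illustrative values, not parameters taken from this specification:

    import numpy as np

    def transmit_delay_profile(n_elements=64, pitch=75e-6, focal_depth=8e-3,
                               c=1540.0, f0=50e6, resolution_fraction=16):
        # Element positions across the active aperture, centered on the beam axis
        x = (np.arange(n_elements) - (n_elements - 1) / 2.0) * pitch
        # Path length from each element to a focal point on the beam axis
        path = np.sqrt(x ** 2 + focal_depth ** 2)
        # Outer elements (longest path) fire first; the center element is delayed most
        delays = (path.max() - path) / c
        # Quantize to the delay resolution: 1/16 of 20 ns = 1.25 ns at 50 MHz
        dt = 1.0 / (f0 * resolution_fraction)
        return np.round(delays / dt) * dt

    profile = transmit_delay_profile()   # delays in seconds, one per aperture element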
The receive beamformer 1603 can also be connected to elements of the active aperture of the array transducer 1601 via the MUX/FEE 1602. During transmit, an acoustic signal penetrates into the subject and generates a reflected signal from the tissues of the subject. The reflected signal is received by the elements of the active aperture of the array transducer 1601 and converted into an analog electrical signal emanating from each element of the active aperture. The electrical signal is sampled to convert it from an analog to a digital signal in the receive beamformer 1603. Embodiments of the invention use quadrature sampling for digitization of the received signal. During the receive cycle of the system 1600, the array transducer 1601 also has a receive aperture that is determined by the beamformer control 1604, which tells the receive beamformer 1603 which elements of the array to include in the active aperture and what delay profile to use. The receive beamformer 1603 of the exemplary embodiment is a digital beamformer.
The receive beamformer 1603 introduces delays into the received signal of each element of the active aperture. The delays are collectively called the delay profile. The receive delay profile can be dynamically adjusted based on time-of-flight - that is, the length of time that has elapsed during the transmission of the ultrasound into the tissue being imaged. The time-of-flight is used to focus the receive beamformer to a point of focus within the tissue. In other words, the depth of the receive beam is adjusted using a delay profile which incorporates information pertaining to the time-of-flight of the transmitted beam.
The received signal from each element of the active aperture is summed, wherein the sum incorporates the delay profile. The summed received signal flows along the receive channel from the receive beamformer 1603 to one or more of the processing module(s) 1611, 1612, 1613, and/or 1615 (including those not shown in FIG. 16), as selected by the user interface 1607 and system controls 1606, which act based upon a user input.
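A minimal sketch of the delay-and-sum operation described above is given below (Python); for brevity the per-channel delays are applied as whole-sample shifts, and the data array and delay values are hypothetical placeholders rather than output of the actual beamformer control:

    import numpy as np

    def delay_and_sum(channel_data, delay_samples, weights=None):
        # channel_data: (n_channels, n_samples) digitized signals from the active aperture
        # delay_samples: per-channel delay profile in whole sample periods
        n_ch, n_samp = channel_data.shape
        if weights is None:
            weights = np.ones(n_ch)              # unity apodization
        summed = np.zeros(n_samp)
        for ch in range(n_ch):
            d = int(delay_samples[ch])
            # Shift each channel by its delay, then accumulate the weighted sum
            summed[d:] += weights[ch] * channel_data[ch, :n_samp - d]
        return summed

    rng = np.random.default_rng(0)
    data = rng.standard_normal((64, 2048))       # 64 receive channels, 2048 samples per line
    line = delay_and_sum(data, rng.integers(0, 8, size=64))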
The beamformer control 1604 is connected to the MUX/FEE 1602 through the transmit beamformer 1605 and the receive beamformer 1603. It is also connected to the system control 1606. The beamformer control 1604 provides information to the MUX/FEE 1602 so that the desired elements of the array transducer 1601 are connected to form the active aperture. The beamformer control 1604 also creates and sends to the receive beamformer 1603 the delay profile for use with the reception of a particular beam. In embodiments of the invention, the receive delay profile can be updated repeatedly based upon the time of flight. The beamformer control 1604 also creates and sends to the transmit beamformer 1605 the transmit delay profile. The system control 1606 operates in a manner known to one of ordinary skill in the art. It takes input from the user interface 1607 and provides the control information to the various components of the system 1600 in order to configure the system 1600 for a chosen mode of operation. The scan converter 1608 operates in a manner known in the art and takes the raw image data generated from the one or more of the processing modules and converts the raw image data into an image that can be displayed by the video processing/display 1609. For some processing modes of operation, the image can be displayed without using the scan converter 1608 if the video characteristics of the image are the same as those of the display.
The processing modules, except as noted herein, function in a manner known to one of ordinary skill in the art. For the PW Doppler module 1611 and the color flow processing module 1613, the pulse repetition frequency (PRF) can be high due to the high center frequencies of embodiments of this invention. The maximum unaliased velocities which may be measured are proportional to the PRF and inversely proportional to the transmit center frequency. The PRFs required to allow for the unaliased measurement of specific velocities given specific transmit center frequencies may be calculated in a method known to one of ordinary skill in the art. Given that the transmit center frequencies used are in the range of 15 to 55 MHz, or higher, and the blood flow velocities can be as high as 1 m/s and in some cases greater than 1 m/s, unaliased measurement of the Doppler signal resulting from those velocities will require the PRF for PW Doppler to be up to 150 kHz. Embodiments of the invention have a PW Doppler mode which supports PRFs up to 150 kHz, which for a center frequency of 30 MHz allows for unaliased measurement of blood velocities up to 1.9 m/s in mice with a zero degree angle between the velocity vector of the moving target and the ultrasound beam axis.
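The figure quoted above can be checked with the standard pulsed Doppler aliasing relation (the Doppler shift 2·v·f0·cosθ/c must not exceed PRF/2); the sound speed used below is an assumed nominal soft tissue value:

    import math

    def max_unaliased_velocity(prf_hz, f0_hz, theta_deg=0.0, c=1540.0):
        # Largest velocity measurable without aliasing in PW Doppler
        return c * prf_hz / (4.0 * f0_hz * math.cos(math.radians(theta_deg)))

    print(max_unaliased_velocity(150e3, 30e6))   # about 1.9 m/s at 150 kHz PRF, 30 MHz, 0 degrees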
In certain embodiments, the RF module 1615 uses interpolation. If the sampling method used is quadrature sampling, then the RF signal may be reconstructed from the quadrature baseband samples by zero padding and filtering, as would be known to one of ordinary skill in the art. If Nyquist sampling is used, then no reconstruction is required since the RF signal is sampled directly. In certain embodiments, the RF module 1615 reconstructs the RF signal from the quadrature samples of the receive beamformer output. The sampling takes place at the center frequency of the receive signal, but in quadrature, giving a baseband quadrature representation of the signal. The RF signal is created by first zero padding the quadrature sampled data stream, with the number of zeros determined by the desired interpolated signal sample rate. Then, a complex bandpass filter is applied to the zero padded data stream, which rejects the frequency content of the zero padded signal that is outside the frequency band from fs/2 to 3fs/2, where fs is the sample frequency. The result after filtering is a complex representation of the original RF signal. The RF signal is then passed on to the main computer unit for further processing such as digital filtering, envelope detection, and display. The real part of the complex representation of the RF signal may be displayed. For example, the RF data acquired for a particular scan line may be processed and displayed. Alternatively, RF data from a certain scan line averaged over a number of pulse echo returns can be displayed, or RF data acquired from a number of different scan lines can be averaged and displayed. The scan lines to be used for acquisition of the RF data can be specified by the user based on evaluation of the B-Mode image, by placing cursor lines overlaid on the B-Mode image. A Fast Fourier Transform (FFT) of the RF data can also be calculated and displayed. The acquisition of RF data and the acquisition of B-Mode data can be interleaved so as to allow for the display of information from both modes concurrently in real time. The acquisition of physiological signals such as the ECG signal can also occur concurrently with the acquisition of RF data. The ECG waveform can be displayed while the RF data is acquired. The timing of the acquisition of RF data can be synchronized with user defined points within the ECG waveform, thereby allowing for the RF data to be referenced to specific times during a cardiac cycle. The RF data can be stored for processing and evaluation at a later time.
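A rough sketch of the zero padding and complex bandpass filtering step described above is shown below (Python); the interpolation factor, filter length, and synthetic baseband data are illustrative assumptions rather than values defined by this specification:

    import numpy as np
    from scipy import signal

    fs = 50e6    # quadrature sample rate, equal to the receive center frequency (assumed)
    L = 4        # zero-padding (interpolation) factor; the RF sample rate becomes L*fs

    # Synthetic baseband quadrature data standing in for one beamformed line
    t = np.arange(512) / fs
    iq = np.exp(-((t - t.mean()) ** 2) / (2 * (1e-6) ** 2)).astype(complex)

    # Zero padding: L-1 zeros between samples creates spectral images at 0, fs, 2*fs, ...
    padded = np.zeros(len(iq) * L, dtype=complex)
    padded[::L] = iq

    # Complex bandpass filter passing fs/2 .. 3*fs/2: a lowpass prototype with cutoff fs/2,
    # modulated up so that its passband is centered at fs
    lp = signal.firwin(63, 0.5 * fs, fs=L * fs)
    n = np.arange(len(lp))
    bp = lp * np.exp(1j * 2 * np.pi * fs * n / (L * fs))

    rf_complex = L * signal.lfilter(bp, 1.0, padded)   # complex representation of the RF line
    rf = rf_complex.real                               # real RF signal for further processing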
FIG. 17 shows a block diagram of the system 1600 further illustrating components of an embodiment of the invention. The array transducer 1601 is connected to the front end transformer 1702 via a cable 1619. The cable 1619 comprises signal pathways from the elements of the array transducer 1601 to the front end transformers 1702. An exemplary embodiment of the cable is described herein and comprises individual micro-coax cables. In addition, connectors can be used on one or both ends of the cable 1619. In one aspect of the invention, a connector with pins equal to twice the number of elements can be used and an exemplary connector is described herein. For each element of the array transducer 1601 a signal and a ground path can be used. In other embodiments of the invention, the ground connection is shared for a grouping of elements. Alternatively, the MUX/Front End Electronics 1702, 1703, 1704, 1708 can be located inside the housing for the linear transducer array 1601. FIG. 17 provides representative details of the circuitry for four elements of the array transducer 1601 as examples for the larger system 1600, wherein there is a front end transformer 1702 and transmit output stage 1703 for each element. For an embodiment with a 256 element array transducer 1601, there are 256 front end transformers 1702 and transmit output stages 1703.
The front end transformers 1702 and transmit output stages 1703 are more fully described below. During receive, the electrical signal from an element of the array transducer 1601 passes through the front end transformer 1702 into the receive multiplexer 1704. The receive multiplexer 1704 selects which element and front end transformer are connected to the receive channel 1705. The receive channel 1705 comprises a low noise amplifier and a time gain control, both more fully described below. The signal then passes from the receive channel 1705 into the analog-to-digital conversion 1706 module where it is digitized. The digital received signal then passes into the receive beamformer 1707, which is a digital beamformer. In block 1707, a delay profile generated in the beamformer control is applied to the received signal. The signal from the receive beamformer 1707 travels into the synthetic aperture memory 1710. The synthetic aperture memory adds the received data from two successive ultrasound lines. An ultrasound line is considered to be the data resulting from returning ultrasound echoes that is received after the transmission of an ultrasound pulse into tissue. Synthetic aperture imaging performs as one of ordinary skill in the art would understand. In part, synthetic aperture imaging refers to a method of increasing the effective size of the transmit or receive aperture. For example, if there are 64 channels in the beamformer, during the reception of one line of ultrasound data, up to 64 transmit channels and 64 receive channels can be used. Synthetic aperture imaging will use two lines of ultrasound data, added together. The first ultrasound line can be acquired with a receive aperture which can span elements 33 to 96. The second ultrasound line is received with an aperture segmented into two blocks, located at elements 1 to 32 and 97 to 128. Both ultrasound lines use the same transmit aperture. When the two ultrasound lines are summed, the resulting ultrasound line is essentially the same as that which would have been received had the receive aperture consisted of 128 channels located at elements 1 to 128, provided that there is no appreciable motion of the tissue being imaged during the time required to acquire the two lines of ultrasound data. In this instance two ultrasound lines were required rather than just one, so the frame rate is lowered by a factor of two. The two receive apertures can be arranged in a different way, as long as together they form a 128 element aperture. Alternatively, the transmit aperture size can be increased while keeping the receive aperture the same. More than two ultrasound lines can be used to increase the aperture by more than a factor of two. The signal from the synthetic aperture memory 1710 is then stored in the RF cine buffer 1713, which is a large memory that stores many received RF lines, as controlled by the asynchronous processing control module 1714. The buffered receive signal is then read into the signal processing unit 1715 at an appropriate rate. The signal processing unit 1715 may be implemented with a dedicated CPU on the beamformer control board. The received signal passes from the signal processing unit 1715 to the computer unit 1717 where it is further processed according to the mode selected by the user. The processing of the received signal by the computer unit 1717 is generally of the type known to a person of ordinary skill in the art, with exceptions as noted herein.
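The two-line synthetic aperture summation described above (two 64-element receptions combined into an effective 128-element receive aperture) can be sketched schematically as follows; beamform() is a placeholder standing in for the receive beamformer, and the element indices follow the example in the text:

    import numpy as np

    def beamform(element_data, element_indices):
        # Placeholder for the receive beamformer: sum over the selected elements
        # (receive delays omitted for brevity)
        return element_data[element_indices, :].sum(axis=0)

    n_elements, n_samples = 128, 2048
    rng = np.random.default_rng(1)
    rx1 = rng.standard_normal((n_elements, n_samples))   # first pulse-echo acquisition
    rx2 = rx1.copy()                                     # second acquisition; assumes no tissue motion

    # Line 1: receive aperture spanning elements 33..96
    line1 = beamform(rx1, np.arange(32, 96))
    # Line 2: receive aperture split into elements 1..32 and 97..128
    line2 = beamform(rx2, np.r_[0:32, 96:128])

    # Summing the two lines approximates a single 128-element receive aperture,
    # at the cost of halving the frame rate
    synthetic_line = line1 + line2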
In one embodiment, as shown in FIG. 17, the computer unit 1717 comprises system software configured to process signals according to the operation mode of the system. For example, the system software in the main computer unit 1717 may be configured to carry out B-Mode processes which may include, for example, preprocessing; persistence processing; cineloop image buffer; scan conversion; image pan; zoom and postprocessing. The system software in the main computer unit 1717 may also be configured to carry out processes for color flow imaging (CFI), which may include, for example, threshold decision matrix; estimate filtering; persistence and frame averaging; cineloop CFI image buffer; scan conversion; color maps and priority. The system software in the main computer unit 1717 may also be configured to carry out processes for PW Doppler, which may include, for example, spectral estimation (FFT); estimate filtering; cineloop spectral data buffer; spectral display generation; postprocessing and dynamic range and audio processing.
The embodiment of the system of FIG. 17 also comprises a user interface panel 1720. In this embodiment the user interface panel 1720 is similar to the standard user interface found on most clinical ultrasound systems. For example, the B-Mode user interface may have image format controls that include image depth; image size; dual image activate; dual image left/right select; flip image left/right; flip image up/down and zoom. Transmit controls may include transmit power (transmit amplitude); transmit focal zone location; number of transmit zones selection; transmit frequency and number of cycles. Image optimization controls may include B-Mode gain; TGC sliders; preprocessing; persistence; dynamic range; frame rate/resolution control and post-processing curves.
As another example of mode-dependent interface controls, a color flow imaging user interface may have image format controls that may include color flow mode select (e.g., color flow velocity, Power Doppler, Tissue Doppler); trackball; steering angle; color box position/size select (after selection the trackball is used to adjust position or size); preset recall; preset menu and invert color map. Transmit controls may include transmit power (transmit amplitude); transmit focal zone location and transmit frequency. Image optimization controls may include color flow gain; gate size; PRF (alters velocity scale); clutter filter select; frame rate/resolution control; preprocessing select; persistence; dynamic range (for Power Doppler only) and color map select.
Yet another example of a user interface is a PW Doppler user interface which may have PW Doppler format controls that may include PW Doppler mode select; trackball; activate PW cursor (the trackball is used to adjust sample volume position); sample volume size; Doppler steering angle; sweep speed; update (selects either simultaneous or interval update imaging); audio volume control and flow vector angle. Transmit controls may include transmit power (transmit amplitude) and transmit frequency. Spectral Display optimization controls may include PW Doppler gain; spectral display size; PRF (alters velocity scale); clutter filter select; preprocessing and dynamic range.
An exemplary M-Mode user interface may have image format controls including M-Mode cursor activation; trackball (used to position cursor); strip size and sweep speed. Transmit controls may include transmit power (transmit amplitude); transmit focal zone location; transmit frequency and number of cycles. Image optimization controls may include M-Mode gain; preprocessing; dynamic range and post-processing.
An exemplary RF Mode user interface may have, for example, RF line acquisition controls that may include RF line position; RF gate; number of RF lines acquired; RF region activate; RF region location; RF region size; number of RF lines in region; averaging; and B-Mode interleave disable. Transmit controls may include transmit power (transmit amplitude); transmit focal zone location; transmit f-number; transmit frequency; number of cycles; acquisition PRF and steering angle. Receive processing controls may include RF Mode gain; filter type, order; window type and number of lines averaged.
The digital samples of the received signal are processed at a rate which is generally different from the rate at which the data is acquired. Such processing is referred to herein as "asynchronous signal processing." The processing rate is the rate at which data is displayed, typically about 30 frames per second (fps). As one would recognize, however, the data can be displayed at a rate up to the acquisition rate or can be displayed at less than about 30 fps. The data can be acquired at much faster frame rates, in certain embodiments of the invention at about 300 frames per second, or at a speed necessary to acquire the diagnostic information desired. For example, image data of rapidly moving anatomical structures such as a heart valve can be acquired using a faster frame rate and then can be displayed at a slower frame rate. Data acquisition rates can be less than 30 fps, 30 fps, or more than 30 fps. For example, data acquisition rates can be 50, 100, 200, or 300 or more fps.
The display rate can be set such that it does not exceed that which the human eye can process. Some of the frames which can be acquired can be skipped during display, although all of the data from the receive beamformer output is stored in an RF data buffer such as the RF cine buffer 1713. The data is sometimes referred to as RF data or by the sampling method used to acquire the data (for instance, in the case of quadrature sampling, the data can also be referred to as baseband quadrature data). The quadrature or RF data is processed prior to display. The processing may be computationally intensive, so there are advantages to reducing the amount of processing used, which is accomplished by processing only the frames which are to be displayed at the display rate, not the acquisition rate. The frames that were skipped over during display can be viewed when live imaging stops or the system is "frozen." The frames in the RF buffer 1713 can be retrieved, processed, and played back at a slower rate, e.g., if the acquisition rate is 300 frames per second, the playback of every frame at 30 frames per second would be 10 times slower than normal, but would allow the operator to view rapid changes in the image. The playback feature is usually referred to as the "Cineloop" feature by persons of ordinary skill in the art. Images can be played back at various rates, or frame by frame, backwards and forwards.
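As a simple illustration of processing only the displayed frames at the display rate, the sketch below decimates a 300 fps RF buffer down to 30 fps for live display while keeping every acquired frame available for slower Cineloop playback; the buffer contents and the processing function are hypothetical placeholders:

    acquisition_fps = 300
    display_fps = 30
    decimation = acquisition_fps // display_fps          # process 1 of every 10 acquired frames

    def process_frame(rf_frame):
        # Placeholder for the mode-specific processing chain (filtering, detection, ...)
        return rf_frame

    rf_cine_buffer = [f"rf_frame_{i}" for i in range(acquisition_fps)]   # one second of RF data

    # Live display: only the frames to be shown are processed
    live_frames = [process_frame(f) for f in rf_cine_buffer[::decimation]]

    # Frozen/Cineloop playback: every stored frame can be processed and replayed at 30 fps,
    # i.e. 10 times slower than real time for a 300 fps acquisition
    playback_frames = [process_frame(f) for f in rf_cine_buffer]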
The system 1600 shown in FIG. 17 can also comprise various items which one of ordinary skill in the art would recognize as being desirable for the function of the system, such as clocks 1712, memory, sound card and speakers, video card and display, etc. and other functional blocks as shown in FIG. 17.
FIGS. 18a and 18b provide additional detail of an embodiment of the MUX/Front End Electronics 1702, 1703, 1704, 1708 and the receive beamformer 1707 and transmit beamformer 1709 functions according to an embodiment of the present invention. In the embodiment shown in FIG. 18a, a channel, for instance a receive channel, can be connected to a node and that node is connected to, for example, four (4) elements of the array transducer 1601 through a switching circuit, or multiplexing circuit, as shown in FIG. 18a. For instance, channel 1 1801 may be switchably connected to elements numbered 1, 65, 129, and 193 in FIG. 18a so that only one of those four elements is connected to channel 1 1801 at any given time. This, in essence, is the performance of the multiplexing function of the MUX/Front End Electronics 1702, 1703, 1704, 1708 during the receive cycle of the system 1600. The assignment of four switchably connected elements to a channel is done such that contiguous elements of any given subset of elements can comprise the active aperture. For example, if the array transducer were comprised of 256 elements, then 64 or fewer elements can form the subset that comprises the active aperture.
The multiplexing of the elements of the array transducer 1601 for the receive cycle can be carried out by an RX switch 1817 as shown in an exemplary diagram (FIG. 18b) of the front end 1802. A control signal 1818 from the beamformer control 1711 determines which RX switch 1817 is activated, thereby connecting the chosen element of the four (4) available elements for that module 1802 to the receive channel. As one skilled in the art would appreciate, the multiplexing scheme illustrated in FIGS. 18a and 18b can be applied to transducers of varying numbers of elements (other than 256 elements) and of varying maximum active aperture sizes (other than up to 64 elements).
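A sketch of the element-to-channel mapping implied by this scheme (channel c switchably serving elements c, c+64, c+128 and c+192 in a 256-element, 64-channel configuration) is given below; the function simply reports which of its four candidate elements each channel selects for a chosen contiguous active aperture:

    def mux_selection(first_element, aperture_size=64, n_channels=64, n_elements=256):
        # Element e (1-based) is served by channel ((e - 1) % n_channels) + 1,
        # so channel c can reach elements c, c + 64, c + 128 and c + 192.
        if not 1 <= first_element <= n_elements - aperture_size + 1:
            raise ValueError("aperture does not fit on the array")
        mapping = {}
        for e in range(first_element, first_element + aperture_size):
            channel = ((e - 1) % n_channels) + 1
            mapping[channel] = e          # each channel is used exactly once per full aperture
        return mapping

    sel = mux_selection(100)              # aperture spanning elements 100..163
    print(sel[1], sel[36])                # channel 1 selects element 129; channel 36 selects element 100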
The exemplary front end 1816 shown in FIG. 18b also comprises the transformer 1819 and pulser 1820, which are described in more detail below. In one aspect, the front end 1816 provides isolation of the receive channel from the transmit waveform, discussed previously herein.
The received signal from the selected array transducer element passes into the low noise amplifier (LNA) 1804. From the LNA 1804, the then amplified signal passes into the time gain control (TGC) 1805. Since elapsed time is proportional to the depth of the received reflected signals, this is also referred to as a depth dependent gain control. In an ultrasound system, as time goes by from the transmission of an ultrasound wave, the signal passes deeper into the tissue and is increasingly attenuated; the reflected signal also suffers this attenuation. The TGC 1805 amplifies the received signal according to a time varying function in order to compensate for this attenuation. The factors which can be used to determine the time varying TGC gain are time of flight, tissue characteristics of the subject or subject tissue under study, and the application (e.g. imaging modality). The user may also specify gain as a function of depth by adjusting TGC controls on the user interface panel 1607. Embodiments may use, for example, an Analog Devices (Norwood, MA) AD8332 or similar device to perform the LNA 1804 and TGC 1805 functions. From the TGC 1805, the receive signal passes into the receive beamformer 1803 where it is sampled by a sampler, in this embodiment the analog-to-digital converters 1807 and 1808. In other embodiments according to the invention, only one analog-to-digital converter is used if sampling is done at a rate greater than the Nyquist rate, for instance at 2 or 3 times the Nyquist rate, where the Nyquist rate involves sampling the ultrasound signals from the individual elements at a rate which is at least twice as high as the highest frequency in the signal.
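A minimal sketch of a depth-dependent (time gain) compensation curve of the kind applied by the TGC is shown below; the attenuation coefficient and gain limit are assumed illustrative values, not parameters from this specification:

    import numpy as np

    def tgc_gain_db(t, f0_hz=30e6, alpha_db_cm_mhz=0.7, c=1540.0, max_gain_db=40.0):
        # Gain (dB) versus elapsed time t (s) compensating round-trip attenuation
        depth_cm = 100.0 * c * t / 2.0                    # round trip: depth = c*t/2
        gain = 2.0 * alpha_db_cm_mhz * (f0_hz / 1e6) * depth_cm
        return np.minimum(gain, max_gain_db)              # clamp to the available TGC range

    t = np.linspace(0.0, 13e-6, 256)                      # roughly 1 cm of depth at 1540 m/s
    gain_curve = tgc_gain_db(t)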
In other embodiments of the invention, quadrature sampling is employed and two analog-to-digital converters are used, namely the "I" and the "Q" samplers. In the exemplary embodiment of the receive beamformer 1803, the receive signal is digitized in blocks 1807 and 1808 using quadrature sampling analog-to-digital converters (ADC); two ADCs are required per channel, with sampling clocks shifted 90° out of phase. The sample rate used can be the center frequency of the receive signal. For comparison, direct sampling would use a sampling rate in theory of at least twice the highest frequency component in the receive signal, but practically speaking a sampling rate of at least three times the highest frequency component is preferred. Direct sampling would use one ADC per channel.
Once sampled, the now digitized received signal passes into a Field Programmable Gate Array (FPGA) in which various functions associated with receive beamforming are implemented. Within the FPGA, the digitized received signal can undergo a correction for the DC offset of the
ADC. This is implemented by the subtraction of a value equal to the measured DC offset at the ADC output. Each ADC may have a different DC offset correction value. The DC offset may be determined by averaging a number of digital samples appearing at the output of the ADC with no signal present at the receive channel input, for example, during a calibration period at system start up. The digitized signal next passes into a FIFO buffer 1822 where each sample is stored for an appropriate duration so that the appropriate delay profile can be implemented. The delay can be implemented in both coarse and fine manners. A coarse delay can be implemented by shifting the signal by one or more sample points to obtain the desired delay. For instance, if the desired delay is one sample period, then shifting by one sample in the appropriate direction provides a signal with the appropriate delay. However, if a delay of a value not equal to the sample period is desired, a fine delay can be implemented using an interpolation filter 1809. From the FIFO buffer 1822, the digitized received signal passes into the interpolation filter 1809 for the calculation of any fine delay. The interpolation filter 1809 is used in a system where the sample period is greater than the appropriate fine delay resolution. For instance, if the sample rate is the center frequency of the ultrasound signal and is 50 MHz, the sample rate is one sample every 20 nanoseconds. However, a delay resolution of 1.25 nanoseconds (1/16 of 20 nanoseconds) is used in certain embodiments to provide the desired image quality, though other delay resolutions are contemplated within the scope of this invention. The interpolation filter 1809 is used to calculate a value for the signal at points in time other than the sampled point. The interpolation filter 1809 is applied to the in-phase and quadrature portions of the sampled signal. Embodiments of the interpolation filter 1809 comprise a finite impulse response (FIR) filter. The coefficients of each filter can be updated dynamically by the beamformer control module based on the time of flight, sample by sample. After processing by the interpolation filter, a phase rotation can be applied by a multiplier 1811 multiplying the in-phase and quadrature components by the appropriate coefficients. The phase rotation is used to incorporate into the interpolated sample the correct phase relative to the ADC sample frequency. The RX controller 1810 controls the FIFO modules and the interpolation filters. The receive delay is updated dynamically, so the interpolation filter coefficients at each channel need to change at certain intervals. The delay implemented by the FIFO also needs to change at certain intervals. Also, the receive aperture size is adjusted dynamically, so each channel becomes active at a specific time during the reception of the ultrasound signal; a channel is activated by multiplying by 1 instead of 0 at the "multiply" module 1811. The multiply module 1811 can also apply a "weight," which is a value between 0 and 1, independently to each channel in the receive aperture. This process, which is known as apodization, is known to one skilled in the art. The value by which the interpolated sample is multiplied may vary with time, so as to implement an apodized receive aperture which expands dynamically during the reception of the ultrasound signal.
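The split of a desired receive delay into a coarse, whole-sample part (applied by the FIFO) and a fine, sub-sample part (applied by the interpolation filter) can be illustrated as follows; the windowed-sinc filter is a generic fractional-delay stand-in, not the coefficient set actually produced by the beamformer control:

    import numpy as np

    def split_delay(delay_s, fs=50e6, fine_steps=16):
        # Coarse part: whole sample periods (FIFO shift); fine part: 1/16-sample steps
        period = 1.0 / fs                     # 20 ns at a 50 MHz sample rate
        coarse = int(delay_s // period)
        fine = round((delay_s - coarse * period) / (period / fine_steps))
        return coarse, fine                   # fine is 0..15 steps of 1.25 ns

    def fractional_delay_fir(frac, n_taps=8):
        # Simple windowed-sinc fractional-delay filter (stand-in for the interpolation filter)
        n = np.arange(n_taps) - (n_taps - 1) / 2.0
        h = np.sinc(n - frac) * np.hamming(n_taps)
        return h / h.sum()

    coarse, fine = split_delay(43.75e-9)      # 43.75 ns -> coarse = 2 samples, fine = 3 steps
    taps = fractional_delay_fir(fine / 16.0)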
FIG. 18c illustrates an exemplary embodiment of a receive controller (RX controller) according to the present invention. The Receive Controller 1810 is used to program the correct delay profile, aperture size and receive apodization data into the processing block 1809 which implements the interpolation and phase rotation and apodization. The Receive Controller 1810 of FIG. 18c sets the initial parameters (Initial Coarse Delay, Initial Phase) once per start-of-line (SOL) trigger and sets the dynamic parameters (Dynamic Focus, Dynamic Apodization) once per receive clock (RXCLK) period. The initial receive delay profile is stored in the RX Initial Aperture Memory 1822. The dynamic receive delay profile is stored in the RX Dynamic Aperture Memory 1824. The delay profile is loaded into the RXBF Buffer 1826 via the 64:16 Crosspoint Switch 1828 before the SOL trigger. The crosspoint switch 1828 selects 16 of the 64 aperture channel configurations. These are used to program the 16 receive channels that are on a single Channel board. The configuration for each receive line is stored in the Line Memory 1830. Each line configuration in the Line Memory 1830 contains the Aperture Select Index, the Mode Select, and the Aperture Enable. The Aperture Select index is used to determine the Aperture to Channel mapping. The Mode Select is used to access multiple delay profiles. The Aperture Enable index controls the initial aperture size. The aperture select look-up table (AP_SEL LUT) 1832 is a method to reduce the number of possible configurations and therefore the number of bits required to be stored in the line memory. The AP_SEL LUT 1832 is re-programmable.
The Memory Control 1834 is a state machine that decodes the line configuration. The state machine is configured by the Control and Status memory 1836. It is configured differently for different modes (e.g. B-Mode, Color Flow Mode, PW Doppler Mode, etc.). The Memory Control 1834 controls the loading of the aperture memory into the RXBF Buffer 1826 and generates the SOL_delayed and FIFO_WEN signals. The pulse SOL_delayed is used to transfer the initial delay parameters into the RX Phase Rotation and RX Apodization block 1809 in a single RXCLK period. The dynamic receive parameters are then transferred in each subsequent RXCLK period. The FIFO_WEN signal starts the receive ADC data acquisition into the FIFO for the RX interpolation filter.
The Control and Status Memory 1836 also contains common parameters such as the Receive Length. The Receive Length parameter determines how many receive samples to collect for each line. It is to be appreciated that increasing the number of receive channels allows for larger receive apertures, which can benefit deep imaging by improving lateral resolution and penetration. The synthetic aperture mode allows for apertures greater than 64 to be used, but at the expense of a reduction in frame rate. With an increase in the number of receive channels, this can be done without a frame rate penalty. In one embodiment according to the present invention, the receive beamformer 1803 allows for multi-line beamforming. Multi-line beamforming allows for higher frame rates by processing multiple receive lines in parallel. Frame rate increases by a factor equal to the number of parallel receive lines. Since beamforming occurs simultaneously for multiple receive apertures, higher data processing rates through the interpolation filters 1809 are used. The amount of data transferred from the receive beamformer to a host CPU would increase by a factor equal to the number of parallel receive lines. The transmit beam is broadened so that it overlaps the multiple receive lines.
The signal from each receive beamformer 1803 is then summed by summers 1815. The summed signal represents a received signal at a given time that is reflected from a given depth. The summed received signal is then routed through modules described earlier and shown in FIG. 17, to the appropriate processing module for the mode of operation selected by the user.
During the transmit operation cycle of the system 1600, selected transmit output stages are connected to the transmit channel in order to form the active aperture. In this aspect, the multiplexing is done prior to the transmit output stage. For example, as previously described, transmit channel 1 1801 can be switchably connected to the transmit output stages corresponding to elements numbered 1, 65, 129, and 193 in FIGS. 18a and 18b so that only one of those four transmit output stages is connected to transmit channel 1 1801 at any given time. It can also be seen in FIGS. 18a and 18b that transmit channel 2 can be switchably connected to the transmit output stages corresponding to elements 2, 66, 130 and 194, and so on. This is the performance of the multiplexing function of the MUX/Front End Electronics 1702, 1703, 1704, 1708 during the transmit cycle of the system.
Referring to FIG. 20, the transmit signal which is multiplexed is the pair of signals designated by TXA 2002 and TXB 2004, which drive the gates of the transmit pulser MOSFETs QTDN 2006 and QTDP 2008 as shown in FIG. 20. These signals 2002, 2004 are unipolar signals of a sufficiently low level so that multiplexing by MOSFET type switches can be used. The assignment of four switchably connected transmit output stages to a transmit channel is done such that contiguous elements of any given subset of elements can comprise the active transmit aperture. For example, in an array transducer comprised of 256 elements, 64 or fewer elements can form the subset that comprises the active transmit aperture.
Optionally, the transmit multiplexing can be done after the transmit output stage using multiplexing circuitry able to accommodate a higher voltage bipolar signal.
Referring back to FIGS. 18a-18d, the transmit beamformer 1812 generates the transmit waveform with the specified delay, in that the waveform is not sent until the appropriate time per the delay profile. The transmit waveform can be a low voltage signal, including a digital signal. Optionally, the transmit waveform can be a high voltage signal used by the array transducer to convert electrical energy to ultrasound energy. The operation of the transformer 1819 and pulser 1820 is described in greater detail below. During the process of transmit beamforming, one or more of the transmit channels within the active transmit aperture can produce a transmit waveform which can be delayed relative to a reference control signal. The number of transmit channels determines the maximum transmit aperture size. The benefit of increasing the number of transmit channels is improved lateral resolution and penetration for deep imaging. In various embodiments, the array transducer has 64 transmit channels or may have 96 or 128 transmit channels. The delays can vary from channel to channel, and collectively the delays are referred to as the transmit delay profile. Transmit beamforming may also include the application of a weighting function to the transmit waveforms, a process known to one of ordinary skill in the art as "apodization." Transmit apodization uses independent control of the amplitude of the transmitted waveform at each channel. The benefit to image quality is improved contrast resolution due to a reduction in spurious lobes in the receive beam profile, which can be either side lobes or grating lobes. Each transmitter output stage can have an independently controlled supply voltage, and control hardware.
Transmit waveshaping involves the generation of arbitrary waveforms as the transmit signal, i.e., the modulation of amplitude and phase within the transmit waveform. The benefit is an improvement to axial resolution through shaping of the transmit signal spectrum. Techniques such as coded excitation can be used to improve penetration without loss of axial resolution. The transmit beamformer 1812 described herein may be implemented in one embodiment with an FPGA device. A typical implementation of a transmit beamformer 1812 which provides a delay resolution of, for example, 1/16 the transmit clock period may require a clock which is 16 times the transmit clock frequency. For the frequency range of the system described here, this would imply a maximum clock frequency of 16 times 50 MHz, or 800 MHz, and a typical FPGA device may not support clock frequencies at that rate. However, the transmit beamformer 1812 implementation described below uses a clock frequency within the FPGA of only eight (8) times the transmit clock frequency.
Each channel of the transmit beamformer is comprised of a TX controller 1814 and a Tx pulse generator 1813. The TX controller 1814 uses a parameter called, for example, an ultrasound line number (also known as a ray number), to select the active transmit aperture through the appropriate configuration of the transmit multiplexer. The ray number value identifies the origin of the ultrasound scan line with respect to the physical array. Based on the ray number, a delay value is assigned to each transmit channel in the active transmit aperture. The TX pulse generator 1813 generates a transmit waveform for each transmit channel using waveform parameters and control signals as described herein.
FIG. 18d is an illustration of an exemplary transmit controller (TX controller) in an embodiment according to the present invention. The transmit controller 1814 is used to program the TX pulse generator 1813 with the correct delay profile (coarse delay and fine delay for each channel) and transmit waveform for each line. It re-programs the TX pulse generator 1813 before each line. The sequence of lines is used to produce a 2-D image. Each line requires a certain subset of the array elements to be used to form the transmit aperture. Each array element within the aperture must be connected to a channel in the TX pulse generator 1813, and the transmit channels must be configured to produce the desired transmit waveforms with delays according to the desired transmit delay profile.
The delay profile and transmit waveform for the entire aperture is stored in the TX Aperture Memory 1838. Multiple delay profiles can be stored in the TX Aperture Memory 1838. Multiple delay profiles are required for B-Mode imaging in which multiple focal zones are used, and for PW Doppler and Color Flow Imaging modes in which the Doppler mode focal depth and transmit waveforms are different than those used for B-Mode. In this exemplary embodiment, the TX Aperture Memory 1838 contains delay profile and transmit pulse wave shape data for a 64 channel aperture. On each Channel Board there are 16 transmit channels, each of which can be connected to one of four different array elements through a transmit output stage. A 64:16 crosspoint switch 1840 is used to route the correct transmit waveform data sets to each of the 16 channels. The control of the other 48 channels is implemented on the other 3 channel boards. The TXBF buffer 1842 temporarily stores the TX pulse generator data before the start of line (SOL) trigger. The TXTRG trigger moves the data from the TXBF Buffer 1842 into the TX Pulse generator 1813 in one TXCLK period. The configuration for each transmit line is stored in the Line Memory 1844. Each line configuration in the Line Memory 1844 contains the following information: Aperture Select Index, Mode Select, Aperture Enable Index, and Element Select Index. The Aperture Select index is used to determine the Aperture to Channel mapping. The Mode Select is used to access multiple delay profiles. The Aperture Enable index controls the aperture size. The Element Select index controls which element is active in the case that there are more array elements than transmit channels or receive channels. The indexing of the Aperture Select, Aperture Enable and
Element Select look-up tables (AP_SEL LUT 1846, AP_EN LUT 1848, ES LUT 1850) is a method to reduce the number of possible configurations and therefore the number of bits required to be stored in the line memory 1844. The look-up tables are all re-programmable. The Control and Status memory 1852 contains common parameters such as the number of transmit cycles (TX Cycles), the number of lines in the frame, and also configures the state machine in the Memory Control block 1854. Memory Control 1854 is a state machine that decodes the Aperture Select, Aperture Enable and Element Select line information. Referring to FIG. 20, it can be seen that the transmit waveform is actually two signals, referred to as the "A" and "B" signals, one of which is applied to the gate of pulser drive MOSFET QTDN 2006 and the other applied to the gate of pulser drive MOSFET QTDP 2008. The "B" signal can be identical to the "A" signal except that it is delayed by one half the period of the transmit clock. The delay applied to each transmit waveform is divided into two components, the "coarse delay" and the "fine delay". The coarse delay can be in units of one half of the transmit frequency period, and the fine delay can be in units of 1/16 of the transmit frequency period, though other units of fine delay are contemplated within the scope of this invention. Other aspects of the transmit waveform which can be adjusted are the transmit center frequency, pulse width, number of cycles and the "dead time". The "dead time" is the time interval following the first half cycle of the output pulse in which neither of the two output stage MOSFETs, QTDN 2006 and QTDP 2008, is turned on. Alteration of the transmit center frequency, pulse width and dead time may be used to alter the frequency content of the final transmit signal to the transducer element.
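The arithmetic behind this scheme, and one plausible way a 16-bit waveshape word could lay out fine delay, pulse width and dead time as bit positions shifted out at 16 times the transmit clock, is sketched below; the bit-level layout shown is an assumption for illustration only and is not the encoding defined by this specification:

    def waveshape_word(fine_delay_steps, pulse_width_steps):
        # Assumed layout: 16 bits are shifted out per transmit clock period (an 8x clock
        # with DDR outputs gives the 16x bit rate), so each bit spans 1/16 of the period.
        # The gate drive is low for fine_delay_steps bits, high for pulse_width_steps bits,
        # and low again for the remaining bits, which form the dead time.
        assert 0 <= fine_delay_steps and fine_delay_steps + pulse_width_steps <= 16
        bits = [0] * fine_delay_steps + [1] * pulse_width_steps
        bits += [0] * (16 - len(bits))
        word = 0
        for i, b in enumerate(bits):          # bit 0 is shifted out first
            word |= b << i
        return word

    # At a 50 MHz transmit clock the period is 20 ns, so each bit slot spans 1.25 ns.
    word_a = waveshape_word(fine_delay_steps=3, pulse_width_steps=8)   # 3.75 ns fine delay, 10 ns high
    print(f"{word_a:016b}")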
Referring now to FIGS. 22-22C, in an embodiment according to the present invention, one transmit pulse generation circuit 2200 is used for each transmit beamformer channel. A 16 bit A waveshape word 2202 is used to encode the fine delay, pulse width and dead time for the A signal. A 16 bit B waveshape word 2203 is used to encode the fine delay, pulse width and dead time for the B signal. The waveshape words 2202, 2203 can be stored in memory within, for example, an FPGA. The frequency of the transmit output signal is determined by the frequency of the transmit clock. The control inputs come from the transmit controller 1814, which can be implemented within the FPGA. These can be the pulse count 2204, the TXTRG 2206, and various clocks, as described below and shown in FIGS. 22-22C.
Transmit pulse generation begins when a TXTRG pulse 2206 is received from the channel control board 1814. The TXTRG signal 2206 is sent to the transmit beamformer channels, and is the signal to which the transmit beamformer delays are referenced. The TXTRG pulse 2206 begins the counting of one-half intervals of the transmit frequency clock cycle, denoted by TXCLKX2 2246. The current hardware implementation uses a clock of 2 times the transmit clock. The coarse delay 2210 is implemented by a Coarse Delay counter 2248 which is clocked by a clock, TXCLKX2 2246. The signal TXTRG 2206 causes the count to begin. A COARSE DONE signal 2208 is generated when the number of clock cycles of
TXCLKX2 2246 has reached the coarse delay input variable value 2210. The COARSE DONE signal 2208 enables the byte select circuit composed of multiplexers 2250 and 2252, the Pulse Inversion select circuit composed of multiplexers 2254 and 2256, and the 8:1 parallel-to-serial circuits 2212 and 2213. The 16 bit waveshape words 2202 and 2203 are transferred into 16 bit registers 2216 and 2217. The output of the A waveshape register 2216 is composed of the Partial Waveshapes: Partial_Waveshape_A(7:0) 2260 and Partial_Waveshape_A(15:8) 2261. Partial_Waveshape_A(7:0) 2260 is transferred to either the 8:1 parallel-to-serial circuit 2212 or the 8:1 parallel-to-serial circuit 2213 through the Pulse Inversion circuit composed of multiplexers 2254 and 2256. Following the transfer of Partial_Waveshape_A(7:0) 2260, Partial_Waveshape_A(15:8) 2261 is transferred to either the 8:1 parallel-to-serial circuit 2212 or the 8:1 parallel-to-serial circuit 2213 through the Pulse Inversion circuit composed of multiplexers 2254 and 2256. The Byte Select signal 2214 controls which of Partial_Waveshape_A(7:0) 2260 or Partial_Waveshape_A(15:8) 2261 is multiplexed through to the Pulse Inversion circuit. In this way, the full 16 bits of Waveshape_A 2202 are transferred to the 8:1 parallel-to-serial circuits for serialization into a one bit data stream.
As can be seen from Figure 22, the transfer of the Waveshape_B 2203 is done in a similar manner. The 8:1 parallel-to-serial circuits 2212 and 2213 have double data rate (DDR) outputs.
COARSE DONE 2208 begins the count of the number of output pulses. When the pulse number counter finishes counting the number of pulses, the Enable signal 2244 goes low, causing the registers 2216 and 2217 to stop outputting the Partial Waveshapes. The 16-bit waveshape of the "A" phase 2202 is converted to a one bit serial stream in two TXCLKX2 2246 cycles. The 16-bit waveshape of the "B" phase 2203 is also converted to a one bit serial stream in two TXCLKX2 2246 cycles. Pulse inversion may be achieved by swapping the "A" and "B" phases before the signals are sent to the parallel-to-serial circuits. The signal swap occurs if the Pulse Inversion signal 2258 is enabled on the Pulse Inversion MUX circuit 2254 and 2256.
The 8:1 parallel-to-serial circuit with double data rate (DDR) output is clocked with TXCLKX8 2266, which is at a frequency of 8 times the transmit clock. With DDR output, the waveshape is shifted out at a rate of 16 times the transmit clock frequency. The signals from 8:1 parallel-to-serial circuit 2212 or 8:1 parallel-to-serial circuit 2213 are transferred out of the FPGA using the LVDS standard before they are re-synchronized by clock TXCLKX16 2236.
The "A" phase signal is re-synchronized by a low jitter positive emitter coupled logic (PECL) flip-flop 2234 and a low jitter clock, TXCLKX16 2236, at 16 times the transmit frequency. This can eliminate jitter added by the circuit inside the FPGA. The "B" phase signal is also re-synchronized by flip-flop 2235.
Both the "A" and "B" signals go to respective driver circuits 2238, 2240 to increase their current drive capability. The output of the drivers become signals TXB 2004 and TXA 2002 and connect to the transmit multiplexers in the front end circuit 2000.
Re-sending of the waveshape data 2202 and 2203 continues until the pulse number counter 2242 has reached the number specified by the pulse count input variable 2204 and the enable signal 2244 changes state.
The 16 bit word which constitutes Waveshape_A 2202 may change from one transmit cycle to the next. The same applies to Waveshape_B 2203. This allows for the generation of transmit waveforms with arbitrarily specified pulse widths from one cycle to the next. Waveshape_A 2202 and Waveshape_B 2203 are specified independently. For example, either odd or even transmit waveforms may be generated.
FIGS. 22A-22C illustrate how the waveshape data can be used to change the fine delay, pulse width and dead time for the "A" and "B" signals. In this example, the "B" output is identical to the "A" output except it is delayed by one half of the transmit frequency period. Figure 22C illustrates that arbitrary waveforms can be generated in the "A" phase and the "B" phase. Any Waveshape_A may be different from the one preceding it, and any Waveshape_B may be different from the one preceding it. In the example in Figure 22C, the 16 bit waveforms used for Waveshape_A1(15:0), Waveshape_A2(15:0) and Waveshape_A3(15:0) are different from one another. In this example, the Waveshape_B(15:0) is repeated twice, but it would be possible to specify that a Waveshape_B be different from the preceding Waveshape_B. The A and B waveforms are independent and can be used to implement transmit waveforms used for coded excitation methods, for example in applications involving contrast agent imaging and non-linear imaging.
The TXPower signal (shown as "TX High Voltage" in FIG. 18b) can control the amplitude of the output of the transmit pulser. As shown in this implementation, TXPower is common to all transmit channels. Optionally, the amplitude of the output pulse of each transmit channel can be controlled individually.
Fig. 19 is a system signal processing block diagram illustrating an exemplary beamformer control board 1900. The beamformer control board 1900 is an exemplary embodiment of the beamformer control and signal processing block 1716. The design and operation of the beamformer control board 1900 is generally known to one of ordinary skill in the art. Embodiments of the exemplary system can have the capability to acquire, process and display physiological signal sources 1901 of one or more of, for example, ECG, respiration, body temperature of the subject, or blood pressure. The physiological signal acquisition block 1902 can contain signal acquisition modules that can acquire those types of physiological signals.
The data transfer to computer unit 1903 transfers data from the beamformer control board 1900 to the computer unit 1905. Embodiments can use a PCI express bus 1904, as is known in the art, for this transfer, or similar buses.
FIG. 20 is an exemplary schematic 2000 of the front end circuit transformer 1702, transmit output stage 1703, the receive MUX 1704, and the transmit MUX 1708. Other exemplary front end circuits can also be used with the described system. For example, front end circuits as described in U.S. Patent No. 6,083,164, entitled "Ultrasound Front-End Circuit Combining the Transmitter and Automatic Transmit/Receive Switch," which is fully incorporated herein by reference and made a part hereof, can be used. The exemplary circuit 2000 depicted in FIG. 20 provides the multiplexing function of connecting an element to the receive channel if that element is part of the active aperture. The front end circuit also provides isolation of the receive channel from the transmit channel, as described herein. The transmit output stage receives a transmit waveform from the transmit pulse generator 1813 and in turn combines the transmit pulse information with the transmit high voltage to create a high voltage waveform at an element which is part of the active transmit aperture.
In the exemplary schematic shown in FIG. 20, transmit pulsing is effected by D1 2010, D2 2012, QTDP 2008, QTDN 2006, QTXMUXP 2014, QTXMUXN 2016 and T1 2018. During transmit, the transmit output stage which is included in the active transmit aperture is connected by turning on QTXMUXP 2014 and QTXMUXN 2016 to allow the gate drive signals, TXA 2002 and TXB 2004, to reach QTDN 2006 and QTDP 2008. During transmit pulsing, either QTDN 2006 or QTDP 2008 is turned on separately, with timing as required to produce the intended transmit waveform. The pulser output appears on the left end of the transformer secondary, LTXS 2038, while the right end is clamped near 0 V by D1 2010 and D2 2012, which can be, for example, ordinary fast silicon switching diodes. During active pulsing, the receive multiplexing switch SW1 2020 can also be turned off to provide additional isolation. The amplitude of the output of the transmit pulser is determined by the transmit supply voltage applied to the center tap of the primary of T1 2018 through R1 2022. Two voltage supplies are available, V1 2024 and V2 2026, where V1 2024 is larger than V2 2026. They are connected to a common node at R1 2022, as shown, through FET switches QLSH 2028, QLSL 2030 and diode D3 2032. One or the other of the supply voltages is selected by turning on either QLSH 2028 or QLSL 2030 using control signals V1 NE 2034 and V2 NE 2036. Diode D3 2032 helps prevent current from flowing from V1 2024 to V2 2026 when V1 2024 is connected to R1 2022. This configuration allows for rapid switching of the transmit supply voltage between two levels, since it avoids the requirement to charge or discharge the supply voltage as held on voltage storage capacitors C4 and C5.
Receive switching is effected by QTDP 2008, QTDN 2006, QLSH 2028, QLSL 2030, and SW1 2020. SW1 2020 is a receive multiplexing switch which can be a single pole single throw (SPST) or a single pole double throw (SPDT) switch of a type such as a GaAs PHEMT (gallium arsenide pseudomorphic high electron mobility transistor). Alternatively, the receive multiplexing switch may be implemented with other types of field effect transistors or bipolar transistors. If SW1 2020 is an SPDT switch, it is configured as shown in FIG. 20, where one terminal is connected to a terminating resistor and the other is connected to the receive channel input. If SW1 2020 is an SPST switch, the terminal connected to the terminating resistor, and the terminating resistor itself, are omitted.
During receive intervals, the receive multiplexing switch is configured such that there is a connection between the array element and the receive channel. The pulser drive MOSFETs, QTDN 2006 and QTDP 2008, are both turned on during receive, while QLSH 2028, QLSL 2030, QTXMUXN 2016 and QTXMUXP 2014 are held off. This causes LTXS 2038 to present mainly its leakage inductance as an impedance in series with the receive signal. For received signals too small to forward bias D1 2010 or D2 2012, these diodes present a high shunt impedance, dominated by their junction capacitance. L1 2040 and the leakage inductance of LTXS 2038 are used to level the receive mode input impedance, compensating for the junction capacitance of D1 2010, D2 2012 and the capacitance of the ganged switches forming the receive multiplexer.
In an alternative implementation of the front end circuit, as shown in FIG. 21, signal RXCLMP is eliminated and its function is performed by TXA and TXB. The transmit function of this circuit is identical to the circuit of FIG. 20, with QTXMUXN and QTXMUXP gating signals TxDriveN and TXDriveP. In receive mode QTXMUXN and QTXMUXP are off, thus blocking signals TXA and TXB. Resistors R8 and R9 shunt QTXMUXN and QTXMUXP so that when TXA and TXB are driven high for the duration of receive mode, the voltage on the gates of QTDN and QTDP increases slowly, resulting in gentle activation of these MOSFET switches. The gentle activation of QTDN and QTDP for receive mode is controlled by signal RXCLMP in the circuit of FIG. 20. In FIG. 21, resistors R5 and R6 pull the voltage on the gates of QTDN and QTDP to ground when the transmit multiplexing switches are turned off after a transmit operation. The pulser employs a center-tapped transformer and NMOS FETs, together with a switch-selectable level supply, to generate nominally square pulses. In order to control the delivered spectrum when connected to the transducer element through a controlled impedance coax cable, it employs series and shunt resistances. These serve to reduce the time-variation of source impedance during operation of the pulser and provide back termination of the transducer during the interval immediately following transmit pulses. Not shown in the schematic is the drive circuit for the final stage MOSFETs. This circuit (which is on the far side of a multiplexer as described below) may be either a discrete switching MOSFET pulse amplifier or a collection of CMOS buffers sufficient to provide the required drive.
The transformer needed for the pulser is built as windings printed on the PCB augmented by small ferrite slabs fastened onto both sides of the PCB, around the windings. This technique is amenable to automated assembly provided the ferrite slabs can be packaged appropriately.
EXAMPLES
The following examples are put forth so as to provide those of ordinary skill in the art with a complete disclosure and description of how the articles, devices and/or methods claimed herein are made and evaluated, and are intended to be purely exemplary of the invention and are not intended to limit the scope of what the inventors regard as their invention. Efforts have been made to ensure accuracy with respect to numbers (e.g., amounts, temperature, etc.), but some errors and deviations should be accounted for.
Example 1
FIG. 23 is a block diagram showing an exemplary system according to an embodiment of the present invention. The exemplary system 2300 is interfaced with a linear array 2302 having, for example, up to 256 elements. A bundle of micro coax cables 2304 provides transmission of the signals between the array 2302 and the processing unit 2306.
The processing unit 2306 is partitioned into two major subsystems. The first is the front end 2308, which includes the beamformer, the front end electronics, the beamformer controller and the signal processing module. The second is the computer unit 2310, or back end. The front end subsystem 2308 is concerned with transmit signal generation, receive signal acquisition, and signal processing. The back end 2310, which can be an off-the-shelf PC motherboard, is concerned with system control, signal and image processing, image display, data management, and the user interface. Data can be transferred between the front and back end sub-systems by, for example, a PCI express bus, as is known in the art to one of ordinary skill. The module which processes the receive signals is the receive beamformer, as previously described herein. The subsystem which generates the transmit signals is the transmit beamformer, also as previously described herein. Each channel of the transmit and receive beamformers is connected to a separate element in the array 2302. By altering the delay and amplitude of the individual transmit or receive signals at each element, the beamformer is able to adjust the focal depth, aperture size and aperture window as a function of depth. The exemplary system of FIG. 23 may support one or more various modes of ultrasound operation as are known in the art to one of ordinary skill. These modes are listed in Table 2, below:
Table 2 Modes Supported
B-Mode
M-Mode
PW Doppler
Color Flow (Velocity) Doppler
Power Doppler
Tissue Doppler
2nd Harmonic
Triplex
EKV
ECG triggered imaging
3-D imaging
3-D real-time (4 Hz)
RF Mode
Anatomical M-Mode
System Specifications
Exemplary specifications of the system shown in FIG. 23 may include, for example, those specifications listed in Table 3, below:
Table 3
System Specifications
System Cart
The system or portions thereof may be housed in a portable configuration such as, for example, a cart, including beamformer electronics 2316, a computer unit 2310, and a power supply unit 2312. The user interface includes an integrated keyboard 2318 with custom controls, trackball, monitor, speakers, and DVD drive. The front panel 2320 of the cart has connectors 2322 for connecting an array-based transducer 2302 and mouse physiological information such as ECG, blood pressure, and temperature. The rear peripheral panel 2314 of the cart allows the connection of various peripheral devices such as a remote monitor, footswitch, and network 2324. The cart has a system of cooling fans 2326, air guides, and air vents to control the heat of the various electronics.
In one embodiment the computer unit 2310 may be an off-the-shelf Intel architecture processor running an operating system such as, for example, Microsoft Windows XP. The computer unit 2310 may be comprised of, for example, an Intel 3 GHz CPU (Xeon Dual Processor or P4 with Hyperthreading); 2 GB DDR memory; PCI Express x4 with cable connector; 100 Mbps Ethernet; USB 2.0; a graphics controller capable of 1024x768x32bpp @ 100 Hz; audio output (stereo); 2x 120 GB 7200 RPM hard disk drives (one for O/S + software; one for user data); and a 300W ATX power supply with power-factor correction. In one embodiment the power supply unit 2312 may be comprised of the following: a universal AC line input (100, 120, 220-240 VAC, 50 or 60 Hz), where the AC input is provided by a detachable cable that connects to a system AC input terminal block and has AC distribution using IEC terminal blocks. In one embodiment, the inrush current is limited to 6 A or less during the first 100 ms of power up. The system cart of FIG. 23, and other embodiments of the invention, is further comprised of system cabling 2328. System cabling 2328 includes a main AC line cord; AC cordage for the line filter, circuit breaker, and power supply unit; AC cordage inside the power supply unit 2312; a computer unit 2310 power supply cord; monitor power supply cord; DVD drive power supply cord; a fan tray 2326 power supply cord; and other power cordages as used in the embodiments according to the invention. System cabling 2328 further comprises instrument electronics cables, which include an instrument electronics sub-rack power cable; PCI Express cable; transducer connector cable; mouse information system (MIS) cable; 3D stage cable; standby switch cable; etc. System cabling 2328 further comprises computer cables, which may include video extension cable(s) (VGA, DVI, SVideo, etc.); keyboard/mouse extension cable(s); keyboard splitter; mouse splitter; remote mouse cable; remote keypad cable; remote video cable; USB extension cable(s); Ethernet extension cable; printer extension cable; speaker extension cable, etc.
Cooling
Filtered ambient air is provided through the use of fans 2326 to the system cart electronics, which include, for example, the beamformer electronics (i.e., the beamformer card cage 2316), power supply unit 2312, and computer unit 2310. The cooling system supports, for example, in one embodiment an ambient operating temperature range of +10 to +35°C, and the exhaust air temperature is kept below 20°C above the ambient air temperature, though other ambient operating ranges are contemplated within the scope of this invention.
Electromagnetic Interference (EMI) Shielding
In one embodiment, the exemplary system is provided with a contiguous EMI shield in order to prevent external electromagnetic energy from interfering with the system operation, and to prevent electromagnetic energy generated by the system from emanating from the system.
The system shielding extends to the transducer cable 2304 and the array 2302, and the transducer connector 2322. The computer 2310 and power supply units 2312 may be housed in separate shielded enclosures within the system. All shields are maintained at approximately ground potential, with very low impedance between them. There is a substantially direct connection between the chassis ground of the system and earth ground. Also, in one embodiment the AC supply may be isolated from the system power supply by an isolation transformer as part of the power supply unit 2312.
Electronics Overview
An overview of an embodiment of the electronics for an exemplary system according to the invention is shown in FIG. 24. In this view, the exemplary system comprises a power supply unit 2402, an instrument electronics subrack, and a computer unit. The power supply unit 2402 distributes both AC and DC power throughout the cart. A DC voltage of, for example, 48V is supplied to the instrument electronics subrack, though other voltages are contemplated within the scope of this invention. The instrument electronics subrack houses a beamformer control board 2404, four identical channel boards 2406, and a backplane 2408. The boards 2406 mate with the backplane 2408 via, for example, blind mate connectors. The instrument electronics communicate with the computer unit via, for example, a PCI express connection 2410.
Channel board
Exemplary channel boards are shown, and have been previously described, in reference to FIGS. 18a-18d. The channel boards 2406 generate the transmit signals with the proper timing for transmit beamforming, and acquire, digitize, and beamform the receive signals. In this exemplary embodiment of FIG. 24, there are four channel boards 2406, each containing 16 transmit channels and 16 receive channels. Each channel board 2406 also contains 64 front end circuits, including transmit output stages, power supply circuitry, an FPGA for the transmit beamformer, an FPGA to provide the partial sum of the receive beamformer, the beamformer bus and connections to the backplane.
As can be seen in FIG. 18a, four front end circuits are multiplexed to each transmit and receive channel. There is one front end circuit for each element in the array, and each front end circuit comprises a transmit output stage, transmit and receive multiplexer switches, a diode limiter, and components for receive filtering, as previously described in reference to FIGS. 18a-18d.
The transmit channels and transmit output stages generate bipolar pulses at a specified frequency ranging from about 15 to about 55 MHz, with a specified cycle count and amplitude. The transmit waveforms generated by each channel have a specific delay relative to the other channels with a resolution equal to approximately 1/16 of the period of the transmit frequency. The delay profile across the active transmit aperture is controlled by the transmit beamformer controller. A low jitter master clock is used to generate the transmit burst signals. The transmit output stage includes a means of adjusting the peak to peak voltage on a per channel basis, in order to create an apodized transmit aperture.
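By way of illustration only, the following sketch (in Python, with illustrative element pitch, focal depth and sound speed values that are not taken from the embodiment) shows how a focusing delay profile across an active transmit aperture may be computed and quantized to the 1/16-period delay resolution described above.

import numpy as np

def transmit_delays(n_elements=64, pitch_m=75e-6, focus_m=10e-3,
                    fc_hz=30e6, c_m_s=1540.0):
    """Per-element transmit delays for a focused beam, quantized to 1/16
    of the transmit period (illustrative parameter values)."""
    x = (np.arange(n_elements) - (n_elements - 1) / 2.0) * pitch_m  # element positions
    path = np.sqrt(x ** 2 + focus_m ** 2)        # element-to-focus path lengths
    delay = (path.max() - path) / c_m_s          # longer paths fire earlier (smaller delay)
    lsb = 1.0 / (16.0 * fc_hz)                   # delay resolution: 1/16 of the period
    return np.round(delay / lsb).astype(int)     # delay in units of 1/16 period

print(transmit_delays()[:8])                     # delay words for the first eight elements

Elements with longer paths to the focus fire earlier, so their quantized delay words are smaller; the outermost elements of the aperture end up with a delay of zero.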
The receive channels provide variable gain adjustment, filtering and digitization of the receive signals, and receive beamforming. The gain is implemented with a variable gain amplifier which also acts as the preamplifier. Gain is varied throughout the acquisition of the ultrasound line according to a predetermined gain profile known as the TGC curve. Anti-aliasing filters precede the ADC (analog-to-digital converter) to prevent aliasing and to limit the noise bandwidth. As shown in FIG. 18a, dual ADCs 1807, 1808 are used for each channel, since the signal is acquired as a quadrature signal. The ADC clocks are phased 90° relative to one another. The sampling frequency is set according to the center frequency of the array being used. The 10 bit output of the ADCs is sent to a dual port RAM. The receive beamformer reads the quadrature samples and carries out interpolation filtering according to the dynamic receive focusing scheme which is controlled by the receive beamformer controller. After interpolation filtering, the outputs from each receive channel are summed and then sent to the CPU via the high speed data transfer bus.
The receive beamformer is set up via the RX Control Bus. The transmit beamformer is set up via the TX Control Bus. The control parameters are updated before the start of each ultrasound line. The control parameters are TX aperture, TX delay profile (coarse and fine delay), RX aperture, RX delay profile (initial, coarse and fine delay), RX phase, and RX apodization. When all the control parameters are set and the system is ready, a start-of-line (SOL) signal is sent to begin a transmit/receive cycle.
Transmit output stage
Multiplexing of the transmit channels occurs prior to the transmit output stage. Since the transmit beamformer can work with arrays with up to 256 elements, there are 256 transmit output stages, one per element. As shown and described in reference to FIGS. 20 and 21, each output stage consists of two MOSFETs driving a center tapped transformer, with the supply voltage at the center tap controlling the pulse amplitude. The output waveform is approximately a square pulse with a variable number of cycles. One end of the secondary of the transformer leads to the array element, the other to the receive protection circuit. Reactive impedance elements provide impedance matching and filtering. A FET switch in series with the gate of each MOSFET provides the multiplexing. The transformer and inductors are implemented, for example, as traces on the printed circuit board. There is a ferrite core for the transformer which is inserted into an opening in the board.
Transmit channel
Each transmit channel is multiplexed to four output stages as can be seen in FIG. 18. There are two transmit signals per channel, one to drive each phase of the push-pull output stage. As can be seen in FIGS. 20 and 21, the analog section of each transmit channel consists of a push-pull type driver circuit capable of driving the gate capacitance of the output stage MOSFETs with the appropriate rise and fall times. These are multiplexed to the output stages by analog switches.
Transmit beamformer
As can be seen in FIG. 22, the transmit beamformer uses DDR memory to produce transmit waveforms clocked at a maximum of approximately 800 MHz. Each channel uses a separate DDR memory output. The output clock rate is about 16X the center frequency (fc), thereby providing capability for the appropriate delay resolution. Jitter is reduced by re-clocking the DDR output with PECL. As can be seen in reference to FIG. 22A, with a clock rate of about 16 x fc, transmit waveshaping can be effected by adjusting the width of the positive or negative half cycles. This capability can introduce "dead time" between the positive and negative half cycles to improve the shape of the output pulse.
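A minimal sketch of the waveshaping idea follows; the split of each half cycle into active and dead sixteenths of the period (here 6 active and 2 dead) is an illustrative assumption, not a value taken from the embodiment.

import numpy as np

def shaped_cycle(active_16ths=6, dead_16ths=2):
    """One transmit cycle sampled at 16 x fc: +1 drive, 0 dead time, -1 drive.
    Narrowing the half cycles inserts dead time between them."""
    assert 2 * (active_16ths + dead_16ths) == 16, "pattern must fill one period"
    half = [1] * active_16ths + [0] * dead_16ths
    return np.array(half + [-s for s in half])

def burst(cycles=3, **kw):
    """Repeat the shaped cycle to form an n-cycle bipolar transmit burst."""
    return np.tile(shaped_cycle(**kw), cycles)

print(shaped_cycle())   # [ 1 1 1 1 1 1 0 0 -1 -1 -1 -1 -1 -1 0 0]

The zero-valued sixteenths are the dead time between the positive and negative half cycles.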
Front End Circuit
For a transducer array comprised of 256 elements, there are 256 front end circuit sections, one dedicated to each array element. As can be seen in reference to FIG. 17, each front end circuit comprises a front end transformer 1702, a transmit output stage 1703, a transmit MUX 1708, a receive MUX 1704, a diode limiter, and components for receive filtering.
Receive channel
Also as can be seen in reference to FIG. 17, each receive channel comprises the circuit elements which are involved with the acquisition of the receive signal. The receive multiplexer 1704 connects the 64 receive channels to the elements within the active aperture, which is a subset of up to 64 contiguous elements within the 256 element array.
Receive beamformer
The receive beamformer, such as the one shown in FIG. 17, is a module which independently processes and sums the digital data acquired by each channel in the receive aperture. Its functions may include, for example: dynamic control of the receive aperture size, i.e., the number of channels used during the acquisition of each receive sample; dynamic control of receive apodization, i.e., the window applied to the receive aperture; dynamic receive focusing, i.e., up sampling of the receive signal and the adjustment of the delay applied to each receive channel during the acquisition of each sample, through the use of interpolation filters and variation of aperture position within the array.
Channel board configuration
As shown in the exemplary system of FIG. 24, there are four channel boards 2406, each containing 16 transmit and 16 receive channels, all plugging into a backplane. Each channel board is assigned an address based on its position in the backplane to allow independent control of each board.
Beamformer Control Board
The beamformer control board 2404 of the exemplary system of FIG. 24 provides an uplink of data to the host CPU (back end) and centralized timing and control for the hardware electronics. The link to the host CPU is via a PCI express bus 2410, which allows a data bit rate of approximately 250 MB/s in each direction per lane. An x8 lane width PCI Express link provides a peak full-duplex bandwidth of approximately 4 GB/s.
The TX/RX controller 2412 provides master timing using start of frame and start of line synchronization signals to the transmit beamformer and receive beamformer. It sets up the beamformer parameters in memory via a custom local bus. All the low-jitter clock frequencies for beamforming are generated on the beamformer control board 2404.
The RF partial sum data from each channel board 2406 is summed 2414 together with synthetic aperture data 2416. Then the ray line data goes into a first-in-first-out (FIFO) memory 2418 where it sits temporarily before being copied to the RF Cine buffer 2420. The RF Cine buffer 2420 stores full frames of RF data and is randomly accessible. Data is read from the RF Cine buffer 2420 and copied to the host CPU via the PCI Express link 2410. Alternatively, the data can be processed by the signal processor module 2422 before being sent to the main computer unit. The data is then buffered, processed further and displayed by the application software and application user interface that runs on the main computer unit.
The data traffic control and reading/writing of control parameters is facilitated by the embedded CPU 2424. The embedded CPU 2424 itself is accessible by the host CPU via the PCI Express link 2410. Other functions provided by the beamformer control board 2404 are the physiological acquisition system and power supply monitoring. FIG. 19, previously referenced herein, is a block diagram of an embodiment of a beamformer control board 1900.
TX/RX Controller
Transmit beamformer control:
The transmit (TX) beamformer control updates the transmit beamformer parameters each transmit line. The parameters include number of coarse delay cycles at the transmit center frequency (fc), number of fine delay cycles (at 16 x fc), transmit waveshape (at 16 x fc), number of transmit cycles, transmit select, and transmit voltage. The transmit beamformer control also schedules the updating of parameters for duplex mode, triplex mode, or multiple focal zones.
Receive beamformer control:
The receive beamformer control controls the receive delay profile, aperture size and apodization for each channel. The delay control consists of coarse and fine delays, which are controlled by the dual port RAM read pointer and the interpolation filter coefficient selector bit, respectively.
The aperture control signal controls the aperture size dynamically by specifying when the output of each channel becomes active. This is done by controlling the clear signal of the final output register of the interpolation filters. Dynamic receive apodization is controlled by five bits of apodization data with which the signal in each channel is multiplied. The receive control signals are read out from a control RAM at the input sample clock rate as shown in FIG. 26.
Transmit/Receive Synchronization:
A block diagram of transmit/receive synchronization is shown in FIG. 27. For B-Mode and M-Mode imaging different transmit and receive frequencies can be used. However, line-to-line timing differences (jitter) between the transmit cycle and receive cycle may be introduced because the clocks are asynchronous. A method to synchronize the transmit and receive clocks is to use a programmable divider (TX_Divider) 2714 to generate the receive clock (RXCLK_B) from the transmit clock (TXCLKx16) as shown in the embodiment of FIG. 27. The receive frequency is a fixed ratio of the transmit frequency. The ratio is the transmit clock frequency times 16 divided by N, where N is an integer. For example, in order to generate a transmit clock frequency of 30 MHz and a receive clock (RXCLK_B) frequency of 26.7 MHz, the TX_Divider 2714 is set to divide by 18. Due to the nature of the divider, RXCLK_B is in good phase alignment with TXCLKx16, and the two clocks always have a minimum phase difference. RXCLK_B is used to synchronize the start of line trigger (SOL) 2702. The synchronized start of line trigger (SOL_S) 2704 generates TX_TRG. TX_TRG is synchronized to TXCLKx8 by TX_TRG SYNC 2716. A delay between SOL_S and TX_TRG can be added if necessary. TX_TRG signals the transmit beamformer to begin a transmit cycle. RXGATE is synchronized to RXCLK_B and signals the receive beamformer to begin data acquisition. A multiplier (RX PLL) 2718 provides the RXCLKx4 clock frequency that is needed by the I/Q Clock Generator 2720 to generate the I and Q clocks.
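A small numerical sketch of the divider arithmetic described above; the function names are illustrative and not part of the embodiment.

def rx_clock_from_divider(f_tx_hz, n_div):
    # Receive clock derived from TXCLKx16 by an integer divider N.
    return 16.0 * f_tx_hz / n_div

def divider_for_rx(f_tx_hz, f_rx_target_hz):
    # Closest integer divider for a desired receive clock.
    return round(16.0 * f_tx_hz / f_rx_target_hz)

n = divider_for_rx(30e6, 26.7e6)          # -> 18, as in the example above
print(n, rx_clock_from_divider(30e6, n))  # 18, ~26.67 MHz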
FIG. 27A illustrates an alternative method of maintaining consistent synchronization between the transmit cycle and receive cycle by delaying the start of line trigger (SOL) 2702 to a point when the phase difference between the transmit clock and the receive clock is at a known state. The SOL trigger 2702 is synchronized by the TX_RX_SYNC pulse. The TX_RX_SYNC pulse is generated by the TX_Sync Timer 2722. The synchronized start of line trigger (SOL_S) 2704 can now start the control timing signals for the transmit beamformer 2706 and receive beamformer data acquisition. TX_TRG is a delayed version of SOL_S that signals the transmit beamformer to begin a transmit cycle. TX_TRG is synchronized to TXCLK. The transmit beamformer 2706 generates the TXGATE multiplexer control signals and TXA/TXB transmit pulses to the front end module. RXGATE signals the receive beamformer to start data acquisition. RXGATE is synchronized to RXCLK 2710. The phase difference between the transmit 2708 and receive clocks 2710 is fixed as long as the TX_Sync_Period 2712 is calculated correctly. The TX_Sync_Period 2712 is the minimum number of transmit clock cycles required to achieve synchronization. For example, if the transmit clock frequency is 30 MHz and the receive clock frequency is 25 MHz, TX_Sync_Period 2712 will be 6 cycles of the transmit clock.
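The TX_Sync_Period arithmetic can be sketched as follows, assuming the two clock frequencies are given as integers in Hz; the helper name is illustrative.

from fractions import Fraction

def tx_sync_period(f_tx_hz: int, f_rx_hz: int) -> int:
    """Minimum number of transmit clock cycles after which the transmit and
    receive clocks return to the same phase relationship."""
    # Smallest integer n_tx with n_tx / f_tx = n_rx / f_rx for integer n_rx.
    ratio = Fraction(f_tx_hz, f_rx_hz)             # e.g. 30 MHz / 25 MHz -> 6/5
    return ratio.numerator

print(tx_sync_period(30_000_000, 25_000_000))      # -> 6 transmit clock cycles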
Clock Generator
The clock generator 2428 provides the appropriate clock frequencies for transmit and receive beamforming. It comprises a low-jitter master clock, a programmable divider, clock buffers and re-synchronization circuits. The frequencies are: transmit frequency (fc) - 25 to 50 MHz; receive frequency - 20 to 50 MHz in-phase and quadrature; digital clocks - fc x2, x4, x8, x16. The fastest clock used in this exemplary embodiment can be 800 MHz (50 MHz x 16).
PCI Express Bridge
The PCI Express bridge 2426 connects the host CPU and the embedded CPU 2424 via a PCI bus 2410. This allows DMA transfers from the RF cine buffer 2420 to the host processor memory and vice versa. PCI Express builds on the communication models of PCI and PCI-X buses. PCI Express uses the same memory mapped address space as PCI and PCI-X with single or burst read/write commands. However, PCI Express is a point-to-point serial interconnect using a switch to connect different devices, whereas PCI and PCI-X are parallel multi-drop buses. PCI Express can be used as a chip-to-chip or board-to-board communication link via card edge connectors or over a cable.
The bandwidth of the PCI Express link may be, for example: Uplink - 210 MB/s burst and 140 MB/s sustained rate for RF Data, MIS Data, and diagnostics; Downlink - 20 MB/s burst and <1 MB/s sustained rate for writing control parameters.
Synthetic Aperture FPGA
The partially summed beamformer RF data from the channel boards 2416 is first processed in the synthetic aperture FPGA. The processing comprises beamformer final summation, synthetic aperture and write to FIFO.
RF Cine Buffer
Functionally, the RF cine buffer 2420 is, for example, a 1 GByte dual port RAM. The RF cine buffer 2420 is a random access memory block that stores RF data organized in lines and frames. The data can be input and output at different rates to support asynchronous signal processing. The data stream is made up of interleaved I and Q beamformed data. The FIFO buffer provides storage of the beamformer data while the memory is being read by the CPU for the next display period.
In one embodiment, buffer specifications may include, for example: Storage - 300 Full Size Frames (512 ray lines x 1024 samples/line x 32 bits I&Q data); Buffer Size - >629 MBytes; Input rate - 140 MBytes/sec; Output rate - 140 MBytes/sec (RF Data Rate), 32 MBytes/sec (Video Rate).
Asynchronous Signal Processing
According to an embodiment, the described exemplary ultrasound system is capable of very high acquisition frame rates in some modes of operation, in the range of several hundred frames per second. The display rates do not have to be equivalent to the acquisition rates. The human eye has a limited response time, and acts as a low pass filter for rapid changes in motion. It has been demonstrated that frame rates above 30 fps have little benefit in adding to perceived motion information. For this reason, displayed ultrasound image information can be processed at a rate of 30 fps or lower, even when the acquisition rates are much higher. To uncouple acquisition from signal processing, a large RF buffer memory is used to store beamformer output data. An exemplary structure for buffering the beamformer output data is shown in FIG. 28. As shown in FIG. 28, the memory buffer 2800 can hold many frames of RF data. For a depth of 512 wavelengths, the storage of a full line of 16 bit quadrature sampled RF uses 4K bytes (1024 I,Q samples * 32 bits/pair). With 512 ray lines per frame, a 1 GByte memory buffer can then hold 512 2D frames. To keep track of frames written to the buffer, the write controller maintains "first frame" and "last frame" pointers, which can be read by the signal processing task, and which point respectively to the first frame in the buffer available for reading and the last frame available for reading.
During active acquisition, the beamformer summation output is written by the Write Controller 2802 to the next available frame storage area, which is typically the storage area immediately following that pointed to by the "last frame" pointer. As the data in each new frame is acquired, the "first frame" and "last frame" pointers are updated so that the data is written to the correct address in the buffer. When acquisition is stopped (freeze), the buffer then contains the last N frames, with the "first frame" pointer indicating the oldest frame in the buffer.
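A simplified sketch of the pointer bookkeeping described above, assuming a fixed number of frame slots; the class and method names are illustrative, and the real write controller is implemented in hardware rather than software.

class RfCineBuffer:
    """Simplified ring buffer holding the last N acquisition frames,
    tracked by 'first frame' and 'last frame' pointers."""

    def __init__(self, n_frames=512):
        self.frames = [None] * n_frames
        self.first = 0          # oldest frame available for reading
        self.last = -1          # newest frame available for reading
        self.count = 0

    def write_frame(self, rf_frame):
        """Write controller: store the next frame, advancing the pointers."""
        self.last = (self.last + 1) % len(self.frames)
        self.frames[self.last] = rf_frame
        if self.count == len(self.frames):
            self.first = (self.first + 1) % len(self.frames)  # overwrite the oldest frame
        else:
            self.count += 1

    def read_frame(self, index):
        """Signal processing task: random access to any stored frame, oldest first."""
        return self.frames[(self.first + index) % len(self.frames)]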
The signal processing module 2422 has access to the RF memory buffer 2420. It accesses one acquisition frame at a time, at the display frame rate, to produce the displayed estimate data. While the system is scanning, a timer signals the signal processing module that a display frame is required. At that time, the signal processing module 2422 will check to see if a new acquisition RF frame is available, and if so, will read the data and begin processing it. If the acquisition rate is faster than the display rate, the acquisition frames will be decimated prior to processing and display. After the system has been put in freeze, the RF frames stored in the memory buffer can be processed at any desired rate, up to the original acquisition rate.
Signal Processing Module
The beamformer control board 2404 comprises a signal processor 2422 in the datapath to reduce the data load and/or computation load on the host CPU. The processor 2422 may be, for example, an FPGA with a sufficient number of multipliers and memory, or a CPU such as, for example, a 970 PPC or a general purpose DSP. The signal processing functions performed are divided between the signal processing module 2422 on the beamformer control board 2404 and the main computer unit (i.e., host computer). They include post-beamforming gain control, B-Mode amplitude detection and log compression, PW Doppler spectral estimation, color flow clutter filter and frequency/power estimation, and asynchronous signal processing or frame averaging. Factors that may be considered in deciding where the processing takes place include the processing speed required, the complexity of the process, and the data transfer rates required.
B-Mode Signal Processing
For B-Mode imaging, the signal processing module 2422 performs processes which may include line interpolation, detection and compression.
Color Flow Imaging (CFI") Signal Processing
In one embodiment according to the present invention, Doppler color flow imaging is combined with B-Mode imaging such that the common blocks of the B-Mode signal path and the Doppler color flow signal path are time multiplexed to provide both types of processing.
Typically, the B-Mode lines are acquired in between the CFI ensembles at a rate of 1 or 2 lines for each ensemble, depending on the relative ray line densities of B-Mode and CFI (typical CFI images use half the ray line density of B-Mode), as is known to one of ordinary skill in the art.
For CFI, the Signal Processing Module 2422 performs processes that may include: ensemble buffering; clutter filter; velocity estimate calculation; power estimate calculation and variance estimate calculation.
After the I and Q waveforms from the receive beamformer summed output have passed through the clutter filter, the various parameters of the Doppler signal are estimated by a Doppler frequency and power estimator in either the host computer or the CPU 2424 on the beamformer control board. The parameters estimated for each sample depth in the ensemble may include Doppler frequency, Doppler power, and the variance of the frequency estimates. These parameters may be used in a decision matrix to determine the probability that the frequency estimate is a true estimate of the Doppler spectrum, rather than a noise or clutter signal estimate. Color flow velocity estimates are derived from the Doppler frequency estimates. All of the estimates are derived using a 2-D autocorrelation method as is known to one of ordinary skill in the art.
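As one hedged illustration, a lag-one autocorrelation estimator of the kind commonly used for color flow imaging is sketched below; the 2-D autocorrelation method referenced above additionally averages over depth, and the array shapes and names here are illustrative rather than taken from the embodiment.

import numpy as np

def cfi_estimates(iq, prf_hz):
    """Frequency, power and variance estimates from a clutter-filtered ensemble.
    iq: complex array, shape (n_pulses, n_depths)."""
    r0 = np.mean(np.abs(iq) ** 2, axis=0)                 # Doppler power at each depth
    r1 = np.mean(np.conj(iq[:-1]) * iq[1:], axis=0)       # lag-1 autocorrelation
    f_dopp = np.angle(r1) * prf_hz / (2 * np.pi)          # mean Doppler frequency
    variance = 1.0 - np.abs(r1) / np.maximum(r0, 1e-30)   # spread of the frequency estimate
    return f_dopp, r0, variance

Color flow velocity would then follow from the Doppler frequency estimate via the usual Doppler equation.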
PW Doppler Signal Processing
Pulsed Doppler acquisition may be used either by itself, in duplex mode, or in triplex mode. In duplex mode, the PW Doppler transmit pulses are interleaved with the B-mode transmit pulses so that the B-mode image is updated in real time while the PW Doppler signal is acquired. The method of interleaving depends on the Doppler PRF selected. The components shared between B-Mode imaging and Pulsed Doppler processing are time multiplexed to accomplish both types of processing.
In triplex mode, Pulsed Doppler is combined with B-Mode and color flow imaging. The simplest implementation of triplex mode is a time interleaving of either a B-Mode line or a CFI line, in a fixed sequence that eventually results in a full frame of B-Mode and CFI image lines. In this implementation, the PRFs for both Pulsed Doppler and CFI are reduced by half, compared with their normal single modes of operation.
The I and Q samples for each ray line are range gated (a selected range of I or Q signals is separated out from the full range available and averaged to produce a single I,Q pair), to select the region of interest for the Doppler sample volume. The length of the range gate can be varied, if desired, by the user to cover a range of depth. The resulting averaged I,Q pairs are sent to a spectral processor, as well as an audio processor, which converts the I,Q Doppler frequency data to two audio output streams, one for flow towards the transducer (forward) and the other for flow away from the transducer (reverse).
For PW Doppler imaging, the signal processing module 2422 performs processes including range gating (digital integration).
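A minimal sketch of the range gating (digital integration) step, assuming one complex I,Q sample vector per ray line; the names are illustrative.

import numpy as np

def range_gate(iq_line, gate_start, gate_len):
    """Average the complex I,Q samples within the Doppler sample volume,
    producing a single I,Q pair for this ray line (digital integration)."""
    gate = iq_line[gate_start:gate_start + gate_len]
    return gate.mean()

# One averaged I,Q pair per transmit pulse builds the slow-time Doppler
# signal that feeds the spectral and audio processors.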
M-Mode Signal Processing
For M-Mode imaging, the signal processing module 2422 performs processes including detection and compression.
EKV Signal Processing
EKV is an acquisition method in which extremely high frame rate images are generated (1000 frames per second and higher) as a post processing operation using ECG (electrocardiograph) signals as timing events. EKV imaging may be implemented with either a single element mechanically scanned transducer, or with a transducer array. EKV imaging involves the acquisition of ultrasound lines at a PRF of 1000 Hz or higher at each line position in the 2-D image over a time period. The time period over which ultrasound lines are acquired at each line position, referred to as the EKV Time Period, can be, for example, 1 second, which is long enough to capture several cardiac cycles in a mouse or other small animal. The acquisition of each ultrasound line involves the firing of a single transmit pulse followed by acquisition of the returning ultrasound data. For example, if there are 250 lines in the 2-D image, a total of 250,000 ultrasound lines will be acquired in the EKV data set. Each frame of the EKV image is reconstructed by assembling the ultrasound lines which were acquired at the same time during the cardiac cycle.
In one embodiment, the sequence of acquisition of the EKV data set may be such that the ultrasound line position remains static while the ultrasound lines are acquired over the time period. For example, if the time period is 1 second, and the PRF is 1 kHz, 1000 ultrasound lines will be acquired at the first ultrasound line position. The line position can then be incremented, and the process repeated. In this way all EKV data for all 250 lines in the 2-D image will be acquired. The disadvantage of this method of sequencing is that the length of time required to complete the full EKV data set can be relatively long. In this example the time would be 250 x 1 second = 250 seconds. In a preferred embodiment which makes use of an array, a method of interleaving allows for a reduction in the length of time required to complete the EKV data set. For example, if the PRF is 1 kHz there is a 1 ms time period between pulses during which other lines can be acquired. The number of ultrasound lines which can be acquired is determined by the two-way transit time of ultrasound to the maximum depth in tissue from which signals are detected. For example, if the two-way transit time is 20 μsec, 50 ultrasound lines at different line positions may be interleaved during the PRF interval. If we label the line positions L1, L2 ... L50, one exemplary interleaving method can be implemented as follows:
(Table: exemplary interleaving sequence over line positions L1 to L50.)
The sequence in the above table is repeated until the EKV Time Period has elapsed, at which time there will be a block of data consisting of 1000 ultrasound lines acquired at 50 different line positions, from line 1 to line 50. The acquisition of the block of data is then repeated in this manner for the next 50 lines in the 2-D image, line 51 to line 100, followed by acquisition over lines 101 to 150, etc., until the full 250 line data set is complete.
The total time required for the complete data set over 250 lines is reduced by a factor equal to the number of lines interleaved, which in this example is 50. Therefore the total length of time required would be 5 seconds.
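The interleaved sequencing can be sketched as a schedule generator using the numbers from this example (250 lines, 50 interleaved positions, 1 kHz PRF, 1 second EKV Time Period); the helper name is illustrative and transmit/receive details are ignored.

def ekv_schedule(n_lines=250, interleave=50, prf_hz=1000, period_s=1.0):
    """Yield (time_s, line_position) firing events for interleaved EKV acquisition:
    each block of 'interleave' positions is revisited once per PRF interval for the
    full EKV Time Period, then acquisition moves on to the next block."""
    pulses_per_position = int(prf_hz * period_s)          # 1000 lines per position
    t = 0.0
    for block_start in range(0, n_lines, interleave):
        for rep in range(pulses_per_position):
            for line in range(block_start, block_start + interleave):
                yield t, line
                t += 1.0 / (prf_hz * interleave)          # 20 us slot per line

events = list(ekv_schedule())
print(len(events), events[-1][0])   # 250000 events spanning about 5 seconds in total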
Embedded CPU
The embedded CPU 2424 on the beamformer control board 2404 is, in one embodiment, a 32-bit embedded microprocessor with a PCI interface 2426 and a DDR memory interface. The main function of the embedded CPU 2424 is data traffic control. It controls data flow from the receive beamformer FIFO 2418 to the RF cine buffer 2420, from the RF cine buffer 2420 to the signal processing module 2422, and from the signal processing module 2422 to the host PC.
The beamformer control and diagnostics information is memory mapped on the target PCI device as registers. The embedded CPU 2424 decodes the location of the registers and relays the information over the appropriate local bus. The local bus can be, for example, PCI, custom parallel (using GPIO), I2C serial, or UART serial, as each are known in the art.
Physiological Acquisition System
The physiological acquisition system 2430 (or "mouse acquisition system") filters and converts analog signals from the mouse information system inputs 2438. These signals may include subject ECG, temperature, respiration, and blood pressure. After data conversion, the data is transferred to the embedded CPU 2424 memory via local bus, and then on to the host CPU for display via the PCI Express link 2410.
Power Supply Monitoring
The beamformer control board 2404 monitors the rack power supply 2432 and lower voltages generated on each board. For example, the rack power supply 2432 may provide +48VDC to the backplane 2408. In one embodiment, two high voltage post regulators 2436 on each channel board 2406 supply the transmit portion of the front end circuit. The beamformer control board 2404 monitors these regulators for over-current or over-voltage situations.
Backplane
The backplane 2408 mounts to the instrument electronics card cage. In one embodiment it has blind mate edge connectors to allow each of the boards to plug in, though other connection schemes are contemplated within the scope of this invention. It provides interconnection between boards, and input/output connectors for signals outside the card cage. In one embodiment, the size of the backplane is 8U high by 84HP wide so that it may fit in an 8U x 19" rackmount VME-style card cage. The card cage depth may be 280 mm in one embodiment.
System Software
An overview of an embodiment of system software 2330 is shown in FIG. 29. Generally, the system software 2330 operates on a processor platform such as, for example, an Intel processor platform running the Windows XP operating system. The processor platform of one embodiment of the system is provided by the computer unit 2310, previously described herein. Alternatively, the system software 2330 may be loaded on a standalone workstation for reviewing studies. The workstation does not contain beamformer hardware, nor does it have a transducer for acquisition of new data. It can review previously acquired study data and perform a limited set of processing functions. For example, the user may add measurements, playback at different frame rates, or change the color map.
FIG. 30 is an embodiment of a main software application that may be used to practice one or more aspects of the present invention. The system software 3000, as shown in FIG. 30, may be loaded when the system powers up and can provide an interface for an operator of the system.
The framework 3018, which determines the overall structure of the components, can be used to produce an application executable by the operating system of the processing platform of the computer unit 2310 and to interface with the operating system. For example, the framework 3018 may produce a Windows application and interface with the Windows operating system.
The application controller 3020 software component can be the state machine for the system software 3000. It may control the interaction between the operator, the system software 3000 and the front end 2308. The application view 3022 software component can provide a foundation to support the presentation of the system software 3000 based on the state machine in the application controller 3020 software component as previously described herein.
The studies component 3002 may allow the operator to perform studies, review study data, edit content, and import or export study data. As previously described herein, there can be various operating modes supported by the system for acquiring data, which can be managed by a modes 3004 software component of the system software 3000. The supported modes may include, for example, B-Mode, 3D Mode, M-Mode, PW-Doppler, Color Flow Doppler, etc. Each mode has adjustable parameters, cine loop acquisition, and a main image display area, which may be managed by the modes 3004 software component. Some modes may operate simultaneously, e.g., PW-Doppler and B-Mode.
The beamformer control 3024 software component can generate the imaging parameters for the front end based on the settings in the system software 3000.
The user data manager 3006 software component may maintain user preferences regarding how the system is configured. The measurements 3026 software component may allow the operator to make measurements and annotations on the mode data.
The calculations 3028 software component may allow the operator to perform calculation on measurements.
The utilities layer 3008 software component contains common utilities that are used throughout the application as well as third party libraries.
The hardware layer 3012 software component is used to communicate to the beamformer hardware via the PCI Express bus, as previously described herein.
The physiological 3030 software component can be used to control the physiological data collection through the hardware layer 3012 as previously described herein.
The data layer 3010 may contain a database of all the different sets of parameters required for operation. The parameters may be set depending on the current user configuration and mode of operation. The message log 3014 and engineering configuration 3016 may be used for diagnostic reporting and troubleshooting.
Transducer select board
Referring back to FIG. 24, it can be seen that in this embodiment according to the present invention the system can have one transducer connector 2438 on the front of the cart, and the user can physically unplug the first transducer and then plug in another when switching transducers. In one embodiment this may be a 360-pin transducer connector 2438. In another embodiment, a transducer select board with two transducer connectors at the front panel can also be used and enables switching between transducers without physically handling the transducers.
Example 2
Another exemplary embodiment of the high frequency ultrasound imaging system comprises a modular, software-based architecture described below and as shown in FIG. 31.
The embodiment of FIG. 31 comprises four modules, which are part of a processing unit, for the exemplary system: a beamformer module 3102; an RF buffer memory 3104; a signal processing module 3106; and a system CPU 3108. The beamformer module 3102 comprises the circuitry for transmitting and receiving pulses from the transducer, as well as the delay processing used for beamforming. Its output can be summed RF data or optionally down-converted I and Q data from quadrature sampling techniques. The output of the beamformer module 3102 may be written to a large RF buffer memory 3104, as described herein. The CPU/signal processing module 3106 is responsible for processing the RF data from the beamformer 3102 for image formation, or Doppler sensing. The signal processing module 3106 can comprise a CPU module with the processing tasks implemented in software executing in a general purpose computing environment. Alternatively, the signal processing module 3106 can be implemented with some signal processing functions in hardware or in software executing on dedicated processors, in which case an additional signal processing module can be implemented as a plug-in card to the system CPU 3108.
If a dedicated hardware solution is chosen for the signal processing module 3106, it can be implemented with high performance CPUs. Optionally it can be implemented with digital signal processing chips (DSPs). One type of DSP which may be used is of the floating point variety, as are known in the art, and can be controlled by the host CPU, as well as being "data driven." The system CPU 3108 can act as both a user interface/control system as well as a signal/image processing sub-system. System control information can be distributed using memory mapped I/O, wherein modules interface to the peripheral bus of the CPU module. Optionally, the system CPU 3108 can be physically separate from the beamformer module 3102 and can be connected via a PCI Express cable (or equivalent) 3110. An exemplary PCI Express cable 3110 is one that supports transfers up to 1 GB/sec and lengths of three meters. Some or all of the memory that exists on various modules can be mapped into the CPU's 3108 memory space, allowing for access to parameters and data.
The system CPU 3108 in the exemplary architecture can perform a number of real-time processing tasks, including signal processing, scan conversion, and display processing. These tasks can be managed in a manner that does not require a "hard" real-time operating system, allowing for expected system latencies in the scheduling of processing routines. In addition, the system CPU 3108 can be responsible for managing the user interface to the system, and providing setup and control functions to the other modules in response to user actions. The CPU motherboard and operating system can support multiple CPUs, with fast access to a high speed system bus, and near real-time task management.
Transmit Beamformer
The beamformer module 3102 of this exemplary system comprises a transmit beamformer. The transmit beamformer can provide functions which may include, for example, aperture control through selection of a subset of array elements, delay timing to start of transmit pulse, transmit waveform generation, and transmit apodization control. For this exemplary embodiment, a transducer array 3112 is utilized. In one embodiment, this transducer array 3112 contains up to 256 elements. To eliminate the need for high voltage switching of transmitter pulse drivers to transducer elements, the transmit beamformer component of the beamformer module 3102 may be comprised of a number of transmitters equivalent to the number of transducer array elements. For instance, in an exemplary array transducer having 256 elements the transmit beamformer comprises 256 transmitters. Optionally, the transmit beamformer can comprise less than 256 transmitters and a high voltage switching method to connect an individual transmitter to a specific element. High voltage multiplexers are used to select a linear subset of elements from a 256 element array.
Optionally, the transmit beamformer component of the beamformer module 3102 may comprise high voltage pulser drivers for all 256 elements of the exemplary array, and a switching mechanism which connects a subset of transmit waveform generators to the appropriate drivers/array elements. This optional embodiment uses 256 TX/RX switches for protection of the receiver inputs with low level multiplexing to select the subset of array elements for the receive aperture. The low level multiplexing can optionally be combined with the TX/RX switches and in some cases has less attenuation of the receive signals and faster switching, when compared with a high voltage MUX scheme.
Transmit delays of 1/16 wavelength can be used and provide appropriate focusing and side lobe reduction in the transmit beam. For desired steering and focus control, the maximum delay times, when measured in wavelengths, can be at least 0.7 times the largest transmit aperture. For example, with 128 transmitters and an array spacing of 1.5 wavelengths, the largest transmit aperture is 192 wavelengths. At 20 MHz center frequency, the maximum transmit delay times can be at least 6.72 microseconds.
For 1/16 wavelength accuracy, the highest center frequency of interest specifies the delay resolution. At 50 MHz, this gives a delay accuracy of 1.25 nsec, which uses the equivalent of an 800 MHz clock and a 13 bit counter to achieve the maximum delay time of 6.72 usec. Optionally, instead of a high frequency clock, a four phase clock at 200 MHz can be used. This would allow selecting a specific transmit delay by selecting one of the four phases of the 200 MHz clock as input to an 11 bit counter, which is preloaded with the number of 200 MHz clocks in the delay time.
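A short sketch of splitting a requested delay into the 200 MHz coarse count and one of the four phase selections (1.25 ns steps); the function name is illustrative.

def transmit_delay_words(delay_s, coarse_clk_hz=200e6, n_phases=4):
    """Split a requested transmit delay into a coarse count of 200 MHz clocks
    and one of four 1.25 ns phase selections."""
    lsb_s = 1.0 / (coarse_clk_hz * n_phases)          # 1.25 ns effective resolution
    ticks = round(delay_s / lsb_s)                    # delay in 1.25 ns steps
    return ticks // n_phases, ticks % n_phases        # (counter preload, phase select)

print(transmit_delay_words(6.72e-6))   # -> (1344, 0): within an 11 bit counter range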
The transmit beamformer component of the beamformer module 3102 further comprises a bi-polar transmit pulser. This type of pulser drive is typically specified with three parameters: T1, which is a transmit frequency (duration of half cycle); T2, which is a half cycle on time (duration of either positive or negative half cycle pulse); and T3, which is a pulse duration (number of half cycle pulses in total transmit). These durations are shown in FIG. 32.
The control of the half cycle pulse duration, T2, allows for closer approximation to a sine wave drive, with improved transducer output. It can also be used to obtain a somewhat crude apodization of the transmit pulse power, provided that sufficiently fine control of the duration is provided.
Transmit apodization can be used to reduce spurious lobes in the transmit beam, which can be either side lobes or grating lobes. Apodization of the transmit aperture results in reduced power output and worse lateral resolution, so it is not always desirable. Often a small amount of apodization capability, such as providing for only a few levels of power output, is sufficient to achieve a good compromise between spurious lobe reduction and lateral resolution. The pulse width modulation scheme mentioned above for transmit waveform generation is one possible means of providing limited transmit apodization. A second method is to provide not one, but possibly four or more levels of high voltage for the pulser drivers, with a means to select one of these levels on each pulser.
Receive Beamformer
The beamformer module 3102 also comprises a receive beamformer component. There are several different receive beamforming implementations which can be used in the exemplary system. The digital methods discussed below have at least one A/D converter for each element in the receive aperture. In this exemplary embodiment, the A/D converter bit depth is 10 bits, which gives the desired beamforming accuracy at -50 dB signal levels. The A/D dynamic range is chosen to reduce spurious lobes and thus provide contrast resolution as desired. Eight bit A/D converters can be used if appropriate. Embodiments of the exemplary system use 64 receive channels, combined using synthetic aperture to implement a 128 channel receive aperture for applications where maximum frame rate is not needed. One optional method for digital receive beamforming implementation samples the ultrasound signals from the individual elements at a rate which is at least twice as high as the highest frequency in the signal (often called the Nyquist rate). For example, for a 50 MHz, 100% bandwidth transducer the Nyquist sampling rate is 150 MHz or higher.
Bandwidth Sampling
Another optional sampling method for the receive beamformer component of the beamformer module 3102 is bandwidth sampling. Sampling theory, as known to one skilled in the art, provides that if a continuous function only contains frequencies within a bandwidth, B Hertz, it is completely determined by its value at a series of points spaced less than 1/(2*B) seconds apart. Sampling a band-limited signal results in multiple copies of the signal spectrum appearing at a fixed relationship to the sampling spectrum. Provided these replicated spectra don't overlap, it is possible to reconstruct the original signal from the under-sampled data. For example, consider a signal with a maximum bandwidth of 20 MHz centered at 30 MHz, and sampled at a rate of 40 MHz. In this situation, the spectrum is replicated as shown in FIG. 32. The original spectrum is replicated in the 0 - 20 MHz portion of the frequency spectrum (it is also reflected about the fs/2 frequency, but this can be accounted for in subsequent signal processing), where the 40 MHz sample rate is adequate to preserve all the information in the signal.
FIG. 33 illustrates bandwidth sampling of a 30 MHz signal spectrum, which may be utilized in an embodiment of the receive beamformer component of the beamformer module 3102. Sampling the signal spectrum in FIG. 33 using normal Nyquist sampling requires a sample rate of 80 MHz or higher. Using bandwidth sampling at 3/4 of the wavelength, as described above, transducer center frequencies up to 60 MHz can be managed with 80 mega samples per second (MSPS) 10 bit A/D converters, which are known in the art and are available from several vendors. In the example given above, the signal spectrum had no frequency components outside of the 20 MHz bandwidth region (66.7% of the center frequency). In practice, a transducer spectrum often has skirts that can extend beyond the 66.7% bandwidth region, creating overlapping spectra and inaccurate signal reconstruction. These skirts can be dealt with by using a bandpass anti-aliasing filter prior to the A/D converter that keeps the power in the spectral skirts extending beyond the bandwidth limits to a desired level, such as 5-10%.
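A small numeric check of the folding behavior described above, using a narrowband stand-in for the 30 MHz band sampled at 40 MHz; the frequencies follow the example, and the code itself is purely illustrative.

import numpy as np

fs, fc = 40e6, 30e6                         # sample rate and band centre from the example
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * fc * t)              # narrowband stand-in for the 30 MHz signal
spectrum = np.abs(np.fft.rfft(x))
f_peak = np.fft.rfftfreq(len(x), 1 / fs)[spectrum.argmax()]
print(f_peak / 1e6)                         # ~10 MHz: the 30 MHz band appears below fs/2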
Quadrature Sampling
Another form of bandwidth sampling, known as quadrature sampling, can optionally be used in an embodiment of the receive beamformer component of the beamformer module 3102.
In this sampling method, two samples are taken at 90° phase with respect to the center frequency.
These samples can be repeated at an interval which is consistent with the bandwidth of the signal. For example, if the quadrature samples are taken at every period of the center frequency, the sample rate supports a 100% bandwidth signal. The sample pair resulting from quadrature sampling is not a true complementary pair, since the samples are taken at different times; however, they are true samples of the analytic waveforms, and concurrent quadrature samples can be found by interpolating the samples of the two I and Q sampled waveforms to the same point in time.
Quadrature sampling may be implemented with one high sample rate converter sampling at four times the center frequency, or with two lower frequency converters each operating at the center frequency but with clocks differing in phase by 90° with respect to the center frequency.
Nyquist Sampling
Optionally, yet another form of sampling can be used in the receive beamformer. This form of sampling is Nyquist sampling combined with bandwidth sampling. Normal Nyquist sampling is used for the lower transducer center frequencies and bandwidth sampling for the higher frequencies. 10 bit A/Ds with maximum sample rates of 105 MSPS are commercially available. With this sample rate capability, a 30 MHz center frequency transducer with 100% bandwidth can be sampled adequately at Nyquist rates. At 40 MHz, Nyquist sampling can be used for transducers with bandwidths up to approximately 60%, so for this center frequency or higher, bandwidth sampling can be used. If these higher sample rates are used, the beamformer processing circuitry also accommodates the higher clock rates and increased storage requirements.
A variation of quadrature sampling can be used to provide a higher bandwidth beamforming capability for those applications that can benefit from it (for example, harmonic imaging). In this method, two quadrature sample pairs may be acquired for every cycle of the center frequency. For example, consider the sampling of a signal which has a center frequency of 30 MHz and significant spectral content beyond 100% bandwidth, such that the spectrum extends to frequencies less than 15 MHz and/or greater than 45 MHz. Two A/D converters per channel may be used to acquire the RF signal at that channel, each sampling periodically at twice the center frequency, 60 MHz. The sample clock of the second A/D converter is delayed by 1/4 of the period of the 30 MHz center frequency relative to the sampling clock of the first A/D converter. Every second sample acquired by the A/D converters will be multiplied by -1. The sample stream originating from the first A/D converter will then be the down-converted quadrature (Q) sample stream, and that originating from the second A/D converter becomes the down-converted in-phase (I) sample stream. The fine delay required for receive beamforming may be implemented by interpolation of the quadrature samples. This method allows for accurate sampling of the RF signal over 200% bandwidth of the center frequency.
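A sketch of this two-converter scheme follows. A cosine phase reference is assumed for the test signal, so which stream ends up labelled I and which Q depends on that reference; the test signal, fixed phase, and names are illustrative.

import numpy as np

fc = 30e6
fs = 2 * fc                                   # each converter samples at twice fc
t = np.arange(256) / fs

def rf(tt):
    return np.cos(2 * np.pi * fc * tt + 0.3)  # test RF input with a fixed phase of 0.3 rad

adc1 = rf(t)                                  # first converter
adc2 = rf(t + 1 / (4 * fc))                   # second converter, delayed by 1/4 period
flip = (-1.0) ** np.arange(len(t))            # negate every second sample

i_stream = adc1 * flip                        # one down-converted baseband stream
q_stream = -adc2 * flip                       # the other stream, 90 degrees apart
print(i_stream[:3], q_stream[:3])             # ~cos(0.3) and ~sin(0.3) respectively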
In an alternative method of providing higher bandwidth beamforming capability which requires one A/D converter per receive channel, the RF output of the beamformer can be formed using two acquisition pulses, similar to a synthetic aperture approach. For example, consider a 30 MHz signal spectrum with 100% bandwidth, so that the -6dB spectrum extends from 15 to 45 MHz. In this case, the signal can be sampled at a 60 MHz sample rate, and the sign of every other sample flipped, to provide a down-converted sample stream that can be taken as the Q channel of a quadrature down-conversion scheme. On the next acquisition, the sampling clock is delayed by 1/4 of the period of 30 MHz, providing (after flipping the sign of alternate samples) the I quadrature waveform. These two quadrature waveforms are then time shifted and combined after beamforming to reconstruct an RF signal that is accurate for 200% bandwidth of the 30 MHz center frequency. This is adequate to capture all the information from an ultrasound transducer with 100% bandwidth. The frame rate is reduced by half compared with single pulse ray line acquisition. Higher frame rates can be achieved over a region of interest by reducing the number of image lines.
In the exemplary embodiment of FIG. 31, receive beamformer delay implementation is performed using the interpolation method. In this approach to beamforming, the A/D converters all sample concurrently, at a constant sample rate (using bandwidth or quadrature sampling). The delays for steering and dynamic focusing are implemented in two steps: 1) a coarse delay stage that implements a delay which is an integral number of sample clock cycles, and 2) an interpolation filter that interpolates to 1/16 of a wavelength time positions in between the coarse samples. The coarse delay stage performs the function of a programmable shift register, whose maximum length is equivalent to the maximum delay time desired in sample periods. The order of these two stages can be reversed if desired, depending on implementation considerations.
Bandwidth sampling interpolation may be described using the following example. For an exemplary 30 MHz array using bandwidth sampling, the sample rate on all A/D converters can be set to 40 MHz, providing a 66.7% bandwidth. With 128 receive channels, about 10 microseconds is desired for maximum delay, thus implementation uses a programmable shift register of about 400 stages. At 40 MHz, the programmable interpolators need only calculate one of eleven intermediate sample values (for 1/16 wavelength accuracy), equally spaced between adjacent 40 MHz samples. The interpolators can be specifically designed for bandwidth sampling to provide for accurate signal reconstruction. Samples can be taken from the output of all channels' interpolators, and summed to produce the sampled RF waveform for the desired beamforming direction.
The signal reconstruction process for interpolating between bandwidth sampled data points is simplified for the example 30 MHz array given above. In this case, every odd sample can be taken as samples of the Q component of the quadrature baseband representation of the signal (with alternate sign), while even samples can be considered to be samples of the I component. A simple bandlimited interpolator can be used to find the I and Q signal values at the appropriate intermediate time point, which can then be combined to reconstruct the RF value.
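For a narrowband (continuous-wave) echo the de-interleaving into I and Q components can be checked numerically. The sketch below is illustrative only, uses an arbitrary echo phase, and ignores the interpolation step:

import numpy as np

# De-interleave a bandwidth-sampled stream (30 MHz center frequency, 40 MSPS) into
# baseband I and Q samples.  A continuous-wave echo is used so the result is obvious.
fc, fs = 30e6, 40e6
n = np.arange(32)
phi = 0.7                                     # arbitrary echo phase
rf = np.cos(2 * np.pi * fc * n / fs + phi)    # bandwidth-sampled narrowband echo

# the sample phase advances by 270 degrees per sample, so the stream is I, Q, -I, -Q, ...
sign = np.array([1, 1, -1, -1] * (len(n) // 4))
i_samples = sign[0::2] * rf[0::2]             # even samples -> I component
q_samples = sign[1::2] * rf[1::2]             # odd samples  -> Q component
print(np.round(i_samples[:4], 3))             # ~cos(phi) throughout for this CW test case
print(np.round(q_samples[:4], 3))             # ~sin(phi) throughout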
If desired, all of the bandwidth sampled data points can be down-converted by the interpolation filters, resulting in a baseband quadrature sampled beamformer output, which can simplify downstream signal processing.
Quadrature sampling interpolation may be described using the following example. In this example, the input signals for each channel are assumed to be quadrature sampled, at one quadrature pair per cycle of the transducer center frequency, providing an input bandwidth of 100% around the center frequency. The two samples in the pair are taken at 90 degrees phase difference with respect to the center frequency, which provides actual samples of the Q and I baseband signals, but the waveforms are sampled at different points in time. Before the Q and I data can be combined, this sampling offset is corrected using interpolation filters. The interpolation required for correcting the sample offsets can optionally be incorporated into the interpolation filters used for beamforming.
Since the quadrature sampling method proposed generates baseband I and Q signals, the interpolation filters are operating on these signals, rather than the RF waveforms. The samples for all channels are taken at the same time, which leads to I and Q waveforms with the same phase relative to an RF waveform common to all channels. This is equivalent to using mixers on all channels to derive I or Q signals, where the carrier frequencies for the mixers all have the same phase. However, correct summation of the I and Q samples from different channels can be provided by adjusting the carrier phase on each channel to match the phase of the time delayed echo waveforms. This amounts to a phase rotation of the interpolated I, Q samples according to the interpolation point relative to 0 degrees phase of the RF center frequency period. This rotation can also be incorporated into coefficients of a FIR interpolation filter, to produce a corrected I and Q output from each channel that can be summed coherently.
By way of explanation of the quadrature sampling interpolation beamforming method, one can first consider a simpler conceptual model, rather than an actual implementation. In this model, interpolation will be implemented to 16 separate points over the period of the center frequency, providing 1/16 wavelength accuracy for beamforming. This level of accuracy has been shown to be sufficient to provide no significant degradation of beam profiles. Considering a quadrature sampled waveform as shown in FIG. 34, the signal is a sine wave whose frequency is 0.9 times the frequency of sampling (which is, for example, 1 Hz in this instance). The Q samples are shown as 'o's 3402, while the I samples are shown as 'x's 3404. As can be seen from the figure, the Q and I samples are samples of much slower changing waveforms, which represent the baseband Q and I waveforms. The interpolation filters operate on these waveforms, to compute 16 interpolation points per period of the sampling frequency.
FIG. 34 shows a quadrature sampled sine wave at 0.9 times the sample frequency. The interpolation points are chosen so that the actual sample values don't fall on an interpolation point. This ensures that the filter function inherent in the interpolation filter is applied to all points. The positions of the 16 interpolated points with respect to the Q and I sample points are shown in FIG. 34A. Typically, a four point FIR filter is sufficient for accurate interpolation. To interpolate the points 0-3, between Q and I samples, a window of eight samples can be used, as shown in FIG. 34B.
To interpolate the points 4-15, the window is moved forward by one sample, as shown in FIG. 34C. Using these windows, a set of eight coefficients for each interpolation position can be computed, which, when multiplied by the sample values in the window, yields the interpolated I and Q values. In the case of the first window, the interpolated I value would be the sum of the even numbered products (0,2,4,6) while the Q values would be the sum of the odd numbered products (1,3,5,7). In the case of the second window, the I and Q values would be reversed. FIG. 35 is a plot of the interpolated values for the example sine wave of FIG. 34, overlaid on that sine wave. In the figure, the interpolated points are shown as the dotted lines and start after the fourth sample point, which is the first position that a window can be applied (in this case, window #2).
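For illustration only, the following sketch interpolates the de-interleaved baseband I and Q streams with a four-point filter each, which corresponds to taking the even-numbered and odd-numbered products of the eight-sample window described above. Cubic Lagrange coefficients are used purely as a stand-in interpolator; the specification does not prescribe the coefficient design. The 1/32-period offset of the interpolation points is included:

import numpy as np

# Four-tap interpolation of the baseband I and Q streams at one of 16 sub-sample positions.
def lagrange4(mu):
    # coefficients for samples at relative positions -1, 0, 1, 2 and a target 0 <= mu < 1
    return np.array([-mu * (mu - 1) * (mu - 2) / 6,
                     (mu + 1) * (mu - 1) * (mu - 2) / 2,
                     -(mu + 1) * mu * (mu - 2) / 2,
                     (mu + 1) * mu * (mu - 1) / 6])

def interp_iq(i_stream, q_stream, sample_idx, point):
    mu = (point + 0.5) / 16.0                  # one of 16 positions, offset by 1/32 of a period
    taps = lagrange4(mu)
    win = slice(sample_idx - 1, sample_idx + 3)
    return taps @ i_stream[win], taps @ q_stream[win]

i_s = np.cos(0.2 * np.arange(20))              # slowly varying test baseband waveforms
q_s = np.sin(0.2 * np.arange(20))
print(np.round(interp_iq(i_s, q_s, 8, 5), 4))  # interpolated (I, Q) near sample 8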
FIG. 36 is an illustration of a data set for the acquisition of a single ray line of echo information from a linear array, consisting of the quadrature sampled signals from each of the transducer elements over a depth range. This data set can be viewed as an array with depth 3602 along one axis and channel number 3604 along the other. To reconstruct a single range point along the ray line from the data set above, an eight sample window is positioned in each channel's data row at the appropriate sample number, which corresponds to depth, and one of the 16 interpolation points is chosen which provides the exact delay required. As shown in FIG. 36, the various channel windows are positioned along a parabolic arc 3606, which corresponds to the curvature of focus needed to reconstruct the range point. The beamforming parameters for the range point are then defined by providing a starting sample number and interpolation number for each channel included in the aperture. After applying the appropriate interpolation filters to each of the channel data shown above, an I and Q sample is obtained for each channel that corresponds to the appropriate delay for the range point. As previously described herein, these I and Q sample pairs cannot simply be summed to derive a beamformed I,Q pair, since the phase of the I,Q sample from each channel is different. Before the I,Q pairs from each channel can be summed, each channel's I,Q pair is phase rotated to correspond to the same phase with respect to the delay time implemented. For example, if two channels are receiving an echo return, where the path length difference to the range point corresponds to exactly 1/2 wavelength of the echo frequency, and these echo returns are sampled at the same times by our quadrature sampling scheme, the samples will fall on different points on the RF signals, and the resulting I,Q waveforms will be 180 degrees out of phase. This situation is illustrated in FIGS. 37A and 37B, in which the reconstruction points on the waveforms of the two channels are indicated by the vertical lines. When the I and Q values at the reconstruction point from the waveforms of the two channels are summed in the beamformer they should add constructively; however, it is apparent that the values are quite different and will not add constructively. To sum the two I and Q values, a vector rotation must be performed first.
The amount of rotation is calculated by determining the distance of the reconstruction point from the start of a sample period, which is effectively the interpolation point number times 1/16 wavelength (plus 1/32 of the period, to be precise). This distance can be converted into an angle by taking the fraction of the total period and multiplying by 2*pi. The rotation equations are then given below:
(1) Qr = I*sin(angle) + Q*cos(angle)
(2) Ir = I*cos(angle) - Q*sin(angle)
Using these rotation equations on the interpolated I and Q samples allows the rotated I's and Q's to be summed coherently. The rotation of the I and Q samples can be incorporated into the 8 coefficients used for interpolation. For example, when using the first interpolation window, where the even samples are I samples, the sin(angle) in equation (1), above, can be multiplied by each of the I coefficients, and the cos(angle) term multiplied by each of the Q coefficients. The resulting FIR filter then provides the rotated Q value, when all product terms are added together. Similarly, another set of coefficients can be used to compute the rotated I value. In this scheme, the FIR filter operates twice per sample period, using different coefficients, to produce an output stream of rotated Q and I values. This stream can be summed with the stream of rotated Q and I values from other channels to produce the beamformer output, which in this case is interleaved
I,Q data representing the down-converted summed RF. Alternatively, the interpolation of the Q and I values may be implemented with separate FIR filters, each with 4 coefficients. In this scheme, the phase rotation is implemented in a stage following the interpolation.
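A sketch of this per-channel rotate-then-sum step is shown below, using equations (1) and (2) above; the channel values and interpolation point numbers are illustrative, and a comment notes where the sin/cos factors would instead be folded into the FIR interpolation coefficients in hardware:

import numpy as np

# Rotate each channel's interpolated (I, Q) pair by its own angle, then sum coherently
# across channels, per equations (1) and (2) above.  In hardware the sin/cos factors can
# be pre-multiplied into the FIR interpolation coefficients instead of applied here.
def rotate(i, q, angle):
    qr = i * np.sin(angle) + q * np.cos(angle)
    ir = i * np.cos(angle) - q * np.sin(angle)
    return ir, qr

channels = [             # (I, Q, interpolation point number) for each active channel
    (0.71, 0.70, 3),
    (-0.69, 0.72, 11),
]
sum_i = sum_q = 0.0
for i, q, point in channels:
    angle = 2 * np.pi * (point + 0.5) / 16   # point number times 1/16, plus 1/32, of the period
    ir, qr = rotate(i, q, angle)
    sum_i += ir
    sum_q += qr
print(round(sum_i, 3), round(sum_q, 3))      # beamformed, phase-aligned I and Q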
The sampling scheme in which two quadrature pairs are acquired for each period of the center frequency also requires a phase rotation after interpolation of the quadrature samples. In this scheme two A/D converters per channel may be used to acquire the RF signal at that channel, each sampling periodically at twice the center frequency. The sample clock of the second A/D converter is delayed relative to the sampling clock of the first A/D converter by 1/4 of the period of the center frequency. Every second sample acquired by the A/D converters will be multiplied by -1. Interpolated values can be calculated for 16 separate points over the period of the center frequency, or for 8 points over the period of the sample clock. The interpolation points calculated over a span of two sample clock periods may be numbered from 0 to 15. The amount of phase rotation required is the interpolation point number multiplied by 2*pi/16. For example, when the interpolation point is located at 1/8 of a sample clock period after the start of odd numbered sample clock cycles, the amount of phase rotation will be 2*pi/16. When the interpolation point is located at 1/8 of a sample clock period after the start of even numbered sample clock cycles, the amount of phase rotation will be 2*pi*(9/16). The interpolation points may be shifted by 1/32 of the center frequency period so that the actual sample values don't fall on an interpolation point, in order to ensure that the filter function inherent in the interpolation filter is applied to all points. After the phase rotation, the values can be summed to provide the beamformer output. The amplitude of the envelope of the received signal output from a quadrature beamformer may be determined by calculating the square root of the sum of the squares of the I and Q samples. A compression curve may then be applied to the envelope amplitude values. Doppler processing can use the summed I and Q sample stream directly to derive Doppler frequency estimates and/or compute FFT spectral data.
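A minimal sketch of the envelope detection and compression step follows; the 60 dB dynamic range, the normalization to the strongest echo, and the test values are assumed display choices, not values from the text:

import numpy as np

# Envelope detection and logarithmic compression of the beamformed quadrature output.
def envelope(i, q):
    return np.sqrt(i ** 2 + q ** 2)           # amplitude of the received echo

def compress(env, dynamic_range_db=60.0):
    env = np.maximum(env, 1e-12)              # avoid log of zero
    db = 20.0 * np.log10(env / env.max())     # normalize to the strongest echo
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)

i = np.array([0.10, 0.80, 0.02, 0.50])
q = np.array([0.10, 0.10, 0.01, 0.40])
print(np.round(compress(envelope(i, q)), 3))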
A possible implementation for interpolation filters operating on quadrature samples is described below. In one embodiment the interpolation filters and control logic can be implemented with an FPGA device. As provided above in reference to FIG. 31, receive beamformer delay implementation may be performed using the interpolation method. A high level diagram of a delay implementation is shown in FIG.25. This diagram shows the functions after A-to-D conversion for a single beamformer channel. The outputs of the two A/D converters are multiplexed into a single sample stream at a constant rate of two times the center frequency. For 10 bit A/D converters, we then have a series of 10 bit samples coming from the A/D converters, with the first sample designated as a Q sample, followed by the I sample of the quadrature pair. This stream is the input to the dual port ram 2502 shown in FIG. 25.
At the start of an acquisition line, a write pointer 2504 and a read pointer 2506 in the dual port ram are reset to the top of the ram 2502. As each new sample comes in, the sample is written to the ram 2502 at the address of the write pointer 2504, which is then advanced to the next sequential location. When the write pointer 2504 reaches the end of the ram 2502, it wraps around to the beginning of the ram 2502 for the next write operation. The dual port rams 2502 are large enough to store samples for the maximum delay required by the steering and focusing needed for the acquisition line. The input side of the dual port ram 2502, with the writing of each new sample and subsequent incrementing of the write pointer 2504, needs no channel unique control mechanism, since all channels can write their input data at exactly the same time and to the same addresses. The output side of the dual port ram 2502 uses independent channel control. FIG. 26 illustrates one mechanism for implementing the control signals required for a single channel. In the embodiment of FIG. 26, a control ram's 2602 address is incremented at the input sample clock rate (2X the center frequency, Fc). The ram 2602 then provides a registered output 2604 where each bit provides an independent control signal.
Returning to FIG. 25, it is shown and described how the receive delays are implemented in one embodiment according to the present invention. For echoes returning from a point located along the centerline of the receive aperture, the echo first appears in the signals from the element or elements closest to the center of the aperture, then later in the elements near the outer portion of the aperture. This means that to align echoes in signals from the center and the outer edge of the aperture, the center signals can be delayed a period of time before they can be summed with the signals from the outer edge. In the dual port ram 2502 example, longer delays are achieved by letting the read pointer 2506 lag further behind the write pointer 2504. Therefore, the center channel in the aperture will have the greatest difference between read pointer 2506 and write pointer 2504, while the outer channels will have the smallest difference.
For dynamic focusing, the focal point is moved outward along the receive line at half the speed of sound, so that the focal point is always at the location of the echo being received. For a constant aperture, as the focal point moves out in range, the delay between the center and outer channels of the aperture decreases. With dynamic aperture, or constant f-number (i.e., focal length divided by the aperture size) operation, the delay between inner and outer channels increases until the maximum aperture is reached, then the delay decreases.
Using dynamic aperture and dynamic focus with the dual port ram delay scheme yields the following operation of the dual port ram pointers 2504, 2506: The center channel is delayed by the maximum delay amount (the amount for the full aperture) by letting the write pointer 2504 move ahead of the read pointer 2506 until the delay is achieved. At that point, the read pointer 2506 is moved ahead at the same rate as the write pointer 2504. An outer channel's initial delay is set by letting the write pointer 2504 move ahead of the read pointer 2506 by the appropriate amount. This initial delay offset can be less than the offset of the read pointer 2506 and write pointer 2504 of the center channel. At this point, the read pointer 2506 is moved ahead at the same rate as the write pointer 2504 until the channel is made active in the aperture. After a channel is made active in the aperture, its delay gradually increases with time to approach that of the center channel. This is accomplished by occasionally not moving the read pointer 2506 ahead when the write pointer 2504 is moved. This increases the offset between the read pointer 2506 and write pointer 2504 gradually with time.
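The pointer behaviour described above can be modelled in a few lines; the RAM size and the clocks on which the read pointer is held are arbitrary illustrative choices:

# Behavioural model of the dual port ram delay: the write pointer advances on every
# sample, while the read pointer is occasionally held back so a channel's delay grows.
class DelayRam:
    def __init__(self, size=512):
        self.ram = [0.0] * size
        self.size = size
        self.wr = 0                # write pointer
        self.rd = 0                # read pointer

    def clock(self, sample, read_enable=True):
        self.ram[self.wr] = sample                 # a write occurs on every sample clock
        self.wr = (self.wr + 1) % self.size        # the write pointer wraps at the end of the ram
        out = self.ram[self.rd]
        if read_enable:                            # RPE high: the read pointer keeps pace
            self.rd = (self.rd + 1) % self.size
        return out                                 # RPE low: the delay grows by one sample

ch = DelayRam()
for n in range(20):
    hold = n in (0, 1, 2, 7, 13)                   # clocks on which the read pointer is held
    ch.clock(float(n), read_enable=not hold)
print((ch.wr - ch.rd) % ch.size)                   # current coarse delay in samples -> 5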
The above operation can be directed with only two binary state control signals as shown in FIG. 26A. The first signal is a read pointer advance enable (RPE) 2600, which allows the read pointer 2506 to advance concurrently with the write pointer 2504. When this signal is true at the time of the Fc * 2 sample clock, the write pointer is advanced after the data is written to the dual port ram 2502, and the read pointer 2506 is advanced at the same time. When the signal is false, the write pointer 2504 is advanced following a write operation, but the read pointer 2506 remains the same. The RPE control signal 2606, 2606a is used not only to set the initial delay of a channel, but also to implement the dynamic focus coarse delays.
The second control signal (CE) 2608, 2608a merely specifies when the channel's output becomes active, so that it participates in the summation of all active channels. This can be accomplished by the CE signal 2608, 2608a controlling the 'clear' input of the final output register of the interpolators. A channel is made active in the aperture according to when its element sensitivity pattern allows it to receive the returning echoes with less than some threshold amount of attenuation. This time must be consistent with the initial delay time implemented by the first control signal. It should be noted that the CE signal 2608 specifies the time a channel becomes active in terms of the number of quadrature samples from the start of the acquisition line. This is because when a channel first participates in the sum of channels, it must contribute a quadrature pair. In the case of the Fc * 2 sample clock, there are two clocks for every quadrature sample pair.
FIG.26 illustrates the control signals as they might appear for a center element (2606 and 2608), and an element at the outer edge (2606a and 2608a) of the full aperture. However, with an even number of channels/elements, there is no actual center element, since the center of the aperture falls between two elements.
For the center channel, RPE 2606 is held low for the maximum delay time needed. This allows the write pointer 2504 to move ahead while the read pointer 2506 remains stationary. After the delay time is reached, RPE 2606 is set high (true) to allow the read pointer 2506 to advance at the same rate as the write pointer 2504. Since there is no dynamic focus required for the center channel, RPE 2606 remains high for the remainder of the acquisition line. The center channel CE signal 2608 brings the channel active shortly after the delay time is reached. The offset is to allow the shift register and register used for the interpolation filter to fill. The CE signal 2608 then removes the clear on the output register so the channel's data can enter the summation bus.
For the outer channel, RPE 2606a is held low for only a short time, since its initial delay is much shorter than the center channel. Then RPE 2606a is set high, allowing this delay to be maintained until the channel is made active. At that time, the RPE signal 2606a is set low for a single clock cycle occasionally to implement the dynamic focus pattern. The CE signal 2608a removes the clear on the output register when the channel can participate in the summation.
Referring back to FIG. 25, the interpolation filters provide the fine delay resolution for beamforming. There are 16 interpolation points per wavelength of the center frequency, providing 1/16 lambda delay resolution. For each interpolation point, two eight point FIR filters are applied - one to generate the analytic signal I sample, and the other to generate the Q sample. This means that the interpolation filter operates twice per period of the center frequency, or at the Sample Clk (Fc * 2) rate. The I and Q samples are output in succession to the output register, which if enabled, feeds the samples into the summation bus.
The input for the interpolation filters comes from the read address of the dual port ram 2502, which typically advances by one sample (I or Q) for each Sample Clk (Fc*2). When a read is performed of the dual port ram 2502, the sample is input to an eight sample shift register 2508, which holds the last eight samples read. If the read operation of the dual port ram 2502 is not enabled (RPE low), then no data enters the shift register 2508, and the read pointer 2506 is not advanced. The shift register 2508 still holds the last eight samples, and no samples are lost when the read pointer 2506 does not advance; the read pointer 2506 simply falls further behind the write pointer.
Every two sample clock cycles, the samples in the shift register 2508 are transferred in parallel to the inputs of the interpolation filter multipliers 2510. There they remain for the two multiply/accumulate operations that generate the I and Q outputs. When no dynamic focusing is occurring, the samples moved to the multiplier inputs shift forward in time by two samples for each center frequency period. The filter then outputs an I and Q sample for each period of the center frequency. With dynamic focus, occasionally the read cycle of the dual port ram is disabled, and the samples moved to the multiplier inputs shift forward by only one sample. This allows the interpolation point to move forward in time by less than a full period of the center frequency. With dynamic focus on an outer channel, the interpolation point is gradually moving back in time, towards the same time as the center channel.
The coefficients used by the interpolation filters are stored in a small ram 2512, which can be loaded by the system CPU. The ram 2512 can hold 32 sets of coefficients, 16 for the I interpolation point and 16 for the Q interpolation point. The coefficients are selected by five address lines, four of which are control lines that come from the control ram 2602. These four lines must provide a new address every other sample clock (Fc*2). The other line selects the I or Q coefficient set for the interpolation point chosen, and can be toggled with the operation of the filter, producing an I and Q sample every period of the center frequency. Finally, the output register 2514 for the interpolation filter holds the output samples before they enter the summation bus. This register's clear input is controlled by the CE control line. This allows a channel to be disabled from contributing to the sum bus until the interpolation output is valid.
Another way to implement the interpolation filters, phase rotation and dynamic apodization is shown in FIG. 25B. In this figure, all digital circuit elements in the upper box 2520 which require a clock are clocked at the receive frequency clock. All digital circuit elements in the lower box 2522 which require a clock are clocked at twice the receive frequency clock. The input I/Q data from the analogue to digital converters (ADCs) 2524, 2526 are written to separate FIFOs 2528, 2530. The samples output from the ADCs 2524, 2526 may undergo an offset correction in which a predetermined constant value is added. The samples from the output of the ADC offset correction stage 2524, 2526 are stored simultaneously into the FIFOs 2528, 2530, so the writing of the new sample into the FIFOs does not require separate timing logic. All the channels share the same write enable signals. The read side of the FIFO of each I and Q channel 2528, 2530 uses independent read enable signals 2532, 2534, controlled by receive delay signals generated by the Beamformer Controller.
The start of the read enable signals 2532, 2534 of each FIFO is delayed by a number of receive clock cycles equal to the initial coarse delay value 2536 required for each channel. If the read enable signal 2532, 2534 is held low while data is written into the FIFO 2528, 2530, the read out of the FIFO will be suspended and the coarse delay 2536 will increase. When the read enable signal 2532, 2534 goes high, the coarse delay 2536 that is applied remains constant. To align echoes in the signals from the center and the outer edge of the aperture, the center signals will be delayed a period of time before they can be summed with the signals from the outer edge. The delay value for sampled data at the center of the aperture is greater than that of the outer edges. Dynamic receive focusing requires a control signal DF 2538 which goes high when the interpolation filter index 2540 needs to be changed. The interpolation filter index 2540 is a modulo 16 number ranging from 0 to 15. The interpolation filter index 2540 will decrease when the interpolation point has shifted by 1/16 wavelength. When the interpolation filter index 2540 decreases from 0 to 15, the FIFO read enable signal 2532, 2534 will go low for one clock cycle, to increase the coarse delay 2536 by one.
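The interaction between the interpolation filter index and the FIFO read enable can be sketched as simple bookkeeping; the starting values and the pattern of DF events below are arbitrary illustrative choices:

# Bookkeeping for one channel's dynamic focus: when the fine interpolation index wraps
# from 0 back to 15, the FIFO read enable drops for one clock and the coarse delay grows.
def apply_df_events(initial_coarse, initial_index, df_events):
    coarse, index = initial_coarse, initial_index
    for df in df_events:              # df is True on clocks where the DF control signal is high
        if df:
            if index == 0:
                index = 15            # index decreases from 0 to 15 ...
                coarse += 1           # ... and the read enable is held low for one cycle
            else:
                index -= 1            # interpolation point shifted by 1/16 wavelength
    return coarse, index

print(apply_df_events(40, 2, [True, False, True, True, False, True]))   # -> (41, 14)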
The fine delay is implemented by interpolation. In this example, the interpolation filters are implemented as systolic FIR filters with 4 taps 2542, 2544, 2546, 2548. There are 16 sets of coefficients for the 16 interpolation points. Each interpolation point has 4 coefficients 2550, 2552, 2554, 2556. By interleaving the I and Q samples and operating the filter at twice the receive clock frequency, the same interpolation filter can be used for both the I and Q samples. Different sets of coefficients are used for the I and Q interpolation, since the I and Q samples acquired by the ADCs are sampled at different points in time but are interpolated to the same point in time. To correct the sampling offset, the interpolation filter index for the Q samples will be offset from that of the I samples by 4. The coefficients used in the interpolation filter can alternate between I coefficients and Q coefficients by switching the address of the RAM 2558 which stores the coefficients. The interpolation filter indices are represented by the address counters 2560 for the coefficients. The address counters 2560 for the I and Q coefficients decrease by one when the DF signal 2538 goes high for one clock cycle. The output of the interpolation filter 2560 is I/Q interleaved.
The interpolated signals are fed to the phase rotation stage 2564, 2566 shown in Figure 25B. There are two multiplier/accumulate elements in the phase rotation circuit. One is used to generate Qr = I*sin(angle) + Q*cos(angle) 2568 and the other to generate Ir = I*cos(angle) - Q*sin(angle) 2570. Sine and cosine coefficients are stored in RAMs as look-up-tables 2572, 2574. There are 16 sets of sine and cosine values. The addresses of the cosine and sine look up tables (LUT) 2572, 2574 are updated at the same time as the interpolation filter coefficients 2550, 2552, 2554, 2556. The phase rotation circuit 2564, 2566 also operates at twice the center frequency. Every second operating cycle produces a pair of valid Ir and Qr data.
For dynamic apodization, the outputs of the phase rotation 2568, 2570 are multiplied by a factor which is dynamically changed during receive. Also if the multiplication factor in a channel is set to zero, the channel does not contribute to the aperture. This way, dynamic aperture updating is achieved. I and Q samples are interleaved through a multiplexer (MUX) 2572 to a common multiplier, which reduces the multiplier resources required. Multi-line Beamforming with Interpolation Filters
The use of interpolation filters for beamforming allows multi-line scanning. In multi-line scanning, several receive lines are reconstructed in the same transmit beam, as shown in FIG.38. The transmit beam is typically broadened with a large depth of field to cover the region where the receive lines will be acquired.
Since the adjacent receive scan lines in a multi-line scan have only small changes in the individual delays for each channel, the interpolation filter delay implementation allows all lines to be processed concurrently. This method works with bandwidth sampling, where the interpolation filters can be operated at a rate higher than the sample rate, as is shown in the exemplary conceptual implementation of the interpolation filter method for an individual channel in FIG. 39.
In FIG. 39, the digital samples from an individual receive channel's A/D converter are sent through a variable length shift register 3902 to implement a coarse delay of an integer number of samples. The output of the variable length shift register 3902 is then sent to a second shift register 3904, where the individual shift stages can be accessed. When this second shift register 3904 is filled, the interpolation filter can operate on a subset of samples, which for the example shown is eight samples. The interpolation filter provides the fine delay for 1/16 wavelength or better resolution. In the example above, the interpolation filter provides an interpolated sample between cells 4 and 5 of the filter shift register.
For 3-1 multi-line scanning, as shown in FIG. 40, the interpolation filter is operated three times for every sample shift. In the example of FIG. 40, the filter window is offset from the nominal position by one sample backwards for the first receive line, and one sample forward for the third receive line. In reality, there may be less than a sample difference in the delay values for the adjacent lines, requiring that all lines use the same filter window. The position of the filter windows for each line is programmable. In situations where the delay differences are greater than one sample, the filter shift register can be expanded to allow greater separations between windows. For bandwidth sampling, where there are only one or two samples per wavelength, the filter windows would often not need to be separated by more than one sample period. The output of the filter operations, as shown in FIG. 40, is time multiplexed into a single output stream. This stream is summed with the contributions from other channels to produce the beamformer output. Note that for 3-1 multi-line the summation circuitry is capable of operating at three times the sample rate. The summation output of the beamformer can then be de-multiplexed to generate the three multi-line receive lines for downstream processing. The downstream processing is capable of processing three lines in the acquisition time of a single ray line.
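A behavioural sketch of this 3-1 multi-line operation for a single channel follows; the per-line window offsets, interpolation points, shift register contents and Lagrange interpolation coefficients are illustrative assumptions rather than values from the specification:

import numpy as np

# 3-1 multi-line processing for one channel: the interpolation filter runs three times
# per input sample, with a programmable window offset and interpolation point per line,
# and the three outputs are time multiplexed into a single stream.
def lagrange4(mu):
    return np.array([-mu * (mu - 1) * (mu - 2) / 6, (mu + 1) * (mu - 1) * (mu - 2) / 2,
                     -(mu + 1) * mu * (mu - 2) / 2, (mu + 1) * mu * (mu - 1) / 6])

def multiline_outputs(shift_reg, line_params):
    # shift_reg: recent samples for this channel (oldest first)
    # line_params: per-line (window offset, interpolation point) pairs
    out = []
    for offset, point in line_params:
        start = len(shift_reg) - 5 + offset            # nominal window position, shifted per line
        taps = lagrange4((point + 0.5) / 16)
        out.append(taps @ shift_reg[start:start + 4])
    return out                                         # time multiplexed: line 1, line 2, line 3

sr = np.array([0.1, 0.3, 0.6, 0.8, 0.9, 0.7, 0.4, 0.1])
print(np.round(multiline_outputs(sr, [(-1, 5), (0, 5), (1, 6)]), 4))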
In the exemplary receive beamforming methods described above, the output is a digital data stream of samples representing the sampled RF data along a reconstruction line. This stream is derived by summing the data samples from all receive channels that participate in the receive active aperture. The RF data stream can be captured in a buffer with sufficient storage to hold an entire ray line. This same buffer can be used for synthetic aperture acquisitions, and can be summed with the RF data from the second half of the receive aperture as it exits from the summation circuitry.
For Nyquist or bandwidth sampling schemes with no down-conversion, the summed RF data stream exits the beamformer as a raw RF stream. This data stream can be converted to a different format using a pair of complementary 90 degree phase difference filters, often referred to as Hilbert transform filters. These filters effectively band-pass the RF signal and down-convert it at the same time to baseband quadrature data streams. These baseband I and Q data streams can then be combined to provide echo amplitude data for 2D imaging, or processed further for Doppler blood flow detection. The Hilbert transform filters can also be used to selectively filter and process a portion of the received signal spectrum, as is needed for harmonic imaging, or frequency compounding. In the case of frequency compounding, the filters can be time multiplexed to produce interleaved output samples from different frequency bands of the spectrum.
Referring back again to FIG. 31, the beamformer module 3102 can also comprise a beamformer control. To orchestrate the events to form a complete image frame, the beamformer uses some sort of controller. The controller can be implemented as a simple state machine, which specifies a series of beamformer events. Each beamformer event can specify a transmit action, a receive action, and/or a signal processing action. Transmit actions specify all the parameters associated with transmitting pulses from the array. These include the duration of connection of the pulsers to the desired elements in the array, the delay times of each pulser, the transmit waveform characteristics, and the transmit aperture apodization function. Receive actions specify all parameters associated with receiving and beamforming the returning echoes. These include specification of the elements connected to the receive channels, the TGC waveform to be used for each channel, the A/D converter sample rates, and the dynamic aperture, steering and/or dynamic focus patterns to be used in the reconstruction process. Finally, the signal processing actions specify what to do with the summation output, such as buffering it for synthetic aperture or sending it to the Hilbert transform filters. The Hilbert transform filters are specified to perform whatever action is needed for the beamformer event.
As is apparent from the above description, the control of the beamforming process can be complex, and a method of handling this complexity is to encode all the information prior to realtime scanning in memory blocks used to control the hardware. The beamformer controller's task is then reduced to 'pointing' to the appropriate portion of the memory block to retrieve the information needed for a beamformer event. Setting up the beamformer for a specific mode of operation is then accomplished by loading all the memory blocks with parameter information, then programming the various beamformer events with their respective pointers into the controller's state machine. To perform the scanning mode, the controller is then told to run, and steps through the events for an entire frame of acquisition data. At the end of the frame, the controller looks for a stop signal, and if none is found, repeats the whole sequence again.
Embodiments of the exemplary ultrasound system are capable of very high acquisition frame rates in some modes of operation, in the range of several hundred frames per second or higher. Just as with other embodiments according to the present invention, exemplary embodiments process displayed ultrasound image information at 30 fps or lower, even if the acquisition rates are much higher, through the use of asynchronous processing as described in reference to FIG. 28. It is to be appreciated, however, that for Nyquist sampled data, the storage is increased by 50 - 100%.
Also as previously described, the signal processing hardware/software has random access to the RF memory buffer, and accesses the RF data from a single acquisition frame to produce the displayed estimate data. In this exemplary embodiment, the maximum frame rate for signal processing and display is 30 fps, which is typically set by a timer that signals the signal processing task every 1/30th of a second. When processing of a new display frame is complete, the signal processing/display task waits for the next 1/30 of a second time tick. At that time, the signal processing task reads the 'Last Frame' pointer from the Write Controller to see if a new frame is available. If the 'Last Frame' pointer has not advanced from the previously processed frame, signal processing does nothing, and waits for the next 1/30 of a second tick. If the 'Last Frame' pointer has changed, signal processing begins on the frame indicated by the pointer. In this manner, signal processing always starts on a 1/30 second tick, and always works on the most recent frame acquired. If acquisition is running much faster than 30 fps, then the 'Last Frame' pointer will advance several frames with each signal processing action. After the system has been put in freeze, the RF frames stored in the memory buffer can be processed at any desired rate, up to the original acquisition rate. One simply calculates how many RF frames to advance in 1/30th of a second, which is computed as a floating point value that can vary from a fraction less than one to as many frames as occurred in 1/30th of a second during real time acquisition. With each 1/30th of a second tick, signal processing accumulates the frame advance value, until an integer boundary is crossed. At that time, signal processing processes the frame which is that integer boundary number of frames ahead of the last frame it processed.
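A behavioural sketch of this frame-advance bookkeeping is shown below; the display rate and tick handling follow the description above, while the function name, structure and example acquisition rates are illustrative:

# Frame-advance bookkeeping for cineloop playback at a 30 fps display rate.  'advance' is
# the (possibly fractional) number of acquired RF frames to step per display tick.
def playback_frames(acquisition_fps, display_fps=30.0, n_ticks=10):
    advance = acquisition_fps / display_fps
    shown, accum, frame = [], 0.0, 0
    for _ in range(n_ticks):
        accum += advance
        if accum >= 1.0:                 # an integer frame boundary has been crossed
            step = int(accum)
            frame += step                # process the frame 'step' frames ahead of the last one
            accum -= step
            shown.append(frame)
    return shown

print(playback_frames(180.0))            # fast acquisition: advance 6 stored frames per tick
print(playback_frames(12.0))             # slow acquisition: some ticks produce no new frame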
Synthetic aperture beamforming is also supported by this memory buffer scheme. In this case, the various lines which make up the synthetic aperture are acquired into the memory buffer sequentially, so that the size of an RF storage frame increases. This is simply a different parameter for the Write Controller, which keeps track of how many lines are written per acquisition frame. For readout, signal processing then combines the multiple RF lines in a synthetic aperture to produce the final result. The RF data for cineloop playback also provides for re-processing the data in different ways, bringing out new information. For example, the wall filters for color flow imaging can be changed during playback, allowing optimization for the specific flow conditions. Second, for the researcher who wants to work with RF data, the buffer memory can be dumped to an external storage device, providing multiple frames of RF data for analysis. Finally, as a diagnostic tool, the buffer memory can be loaded with test RF data from the CPU, allowing debug, analysis and verification of the signal processing methods.
For the Nyquist sampled beamforming method, down-converted quadrature sampled data is derived from the RF data for amplitude detection and Doppler processing. This can be obtained with complementary phase FIR filters that are designed to have a 90 degree phase difference over the frequencies in the pass band. These filters can also down-convert the sample stream to a lower sample rate, provided the output sample rate is still sufficient to sample the range of frequencies in the signal. To provide down-converted output samples, the filters operate on RF data that is shifted by an integral number of cycles of the center frequency of the spectrum. Alternately, different filters can be designed for non-integer number of cycle shifts to obtain smaller decimation ratios. A schematic design of an exemplary Hilbert filter, as are known in the art by one of ordinary skill, is shown in FIG. 41.
The filters are designed by first computing a low pass filter using a windowing method. The filter length should be around 40 taps to ensure a good response over a broad range of frequencies, and should be a multiple of the number of samples in the period of the center frequency of the RF data. For example, if the sample rate is 120 MHz and the center frequency is 30 MHz, there are 4 samples in the period of the center frequency and an appropriate filter length would be 40 taps (10 periods). The low pass coefficients are then multiplied by a sine and cosine function, whose frequency matches the center frequency. In the 30 MHz example, each period of the sine and cosine function has 4 samples.
To obtain down-converted samples, the filters are applied on samples that are shifted by an integral number of cycles of the center frequency. In the 30 MHz center frequency case (sampled at 120 MHz), the RF samples are shifted by 4 samples at a time, leading to a decimation ratio of 4 to 1. With this decimation ratio, the input signal is restricted to 100% bandwidth; otherwise, aliasing of the output samples will result.
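The following sketch builds such a filter pair for the 30 MHz / 120 MHz example and applies it on 4-sample shifts to obtain 4:1 decimated baseband I and Q samples. The Hamming window, the gain normalization and the test signal are assumptions for illustration; the specification does not fix these details:

import numpy as np

# Quadrature band-pass (Hilbert-type) filter pair with 4:1 down-conversion: a windowed
# low-pass prototype is modulated by cosine and sine at the 30 MHz center frequency,
# then applied on 4-sample (one-cycle) shifts of the 120 MHz RF data.
fs, fc, ntaps = 120e6, 30e6, 40
n = np.arange(ntaps) - (ntaps - 1) / 2
lowpass = np.sinc(2 * (fc / 2) / fs * n) * np.hamming(ntaps)   # windowed low-pass prototype
lowpass /= lowpass.sum()                                        # unity gain at DC
t = np.arange(ntaps) / fs
h_i = 2 * lowpass * np.cos(2 * np.pi * fc * t)   # cosine-modulated (in-phase) filter
h_q = 2 * lowpass * np.sin(2 * np.pi * fc * t)   # sine-modulated (quadrature) filter

rf = np.cos(2 * np.pi * fc * np.arange(400) / fs + 0.4)   # test RF line at the center frequency
i_base = np.convolve(rf, h_i, mode='valid')[::4]          # filter outputs taken every 4 samples
q_base = np.convolve(rf, h_q, mode='valid')[::4]          # -> 4:1 decimated baseband I and Q
print(np.round(i_base[:3], 3), np.round(q_base[:3], 3))   # approximately constant for a CW echo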
To obtain smaller decimation ratios, the filters can use alternate coefficient sets to preserve the phase information. In the 30 MHz example, to achieve a decimation ratio of 4 to 2, two sets of coefficients are used - one for 0 degrees phase, and another for 180 degrees phase. These alternate coefficient sets are obtained by sampling the sine and cosine at the appropriate phase increments before multiplying with the low pass filter coefficients. In this case, where the shift between output samples is 1/2 the period of the center frequency, a simple method to provide the decimation rate is to leave the coefficients the same, and invert the sign of the filter output for the 1/2 period increments. The pass band characteristics of the filters can be modified using different windowing functions. This may be desirable in applications such as harmonic imaging or tracking filters. Frequency compounding can be achieved without additional filters for high decimation ratios, provided that the filters can operate at the input sample rate. For the 30 MHz example, two filters can be used with different center frequencies that operate on RF data at two sample shift increments. The filter block outputs a different filter result every two samples. The two interleaved I,Q samples from the different filters are then detected and summed together to produce a 4 to 1 decimated detected output.
Example 3 An embodiment of the exemplary system interface to an array with up to 256 elements may be used to obtain ultrasound images. Table 4 shows exemplary depth range, field of view, frame rate in B-Mode and frame rate in color flow imaging (CFI) for acquiring images. These operating parameters can be used for the particular small animal imaging application described in the far left column. As would be clear to one skilled in the art, however, other combinations of operating parameters can be used to image other anatomic structures or portions thereof, of both small animal and human subjects.
A small animal subject is used and the animal is anesthetized and placed on a heated small animal platform. ECG electrodes are positioned on the animal to record the ECG waveform. A temperature probe is positioned on the animal to record temperature. The important physiological parameters of the animal are thereby monitored during imaging. The anesthetic used may be, for example, isoflurane gas or another suitable anesthetic. The region to be imaged is shaved to remove fur. Prior to imaging, an ultrasound conducting gel is placed over the region to be imaged. The ultrasound array is placed in contact with the gel, such that the scan plane of the array is aligned with the region of interest. Imaging can be conducted "free hand" or by mounting the array onto a fixture to hold it steady.
B-Mode frame rates are estimated for the different fields of view indicated in Table 4. Higher frame rates are achievable with reduced field of view. Color flow imaging (CFI) frame rates are estimated for the indicated color box widths, with line density one-half that of B-mode, and with the B-mode image acquired concurrently.
Table 4
Exemplary Depth Range, Field Of View, Frame Rate In B-Mode And Frame Rate In Color Flow Imaging (CFI) For Acquiring Images
Figure imgf000087_0001
Unaliased velocities measurable with a 150 KHz PRF, for various center frequencies and angles are shown in Table 5 for Pulsed Wave (PW) Doppler. Table 5
Unaliased Velocities Measurable With 150 KHz PRF, for Various Center Frequencies and Angles
Figure imgf000088_0001
A mouse heart rate may be as high as 500 beats per minute, or about 8 beats per second. As the number of frames acquired per cardiac cycle increases, the motion of the heart throughout the cardiac cycle can be more accurately assessed. The frame rate should be at least 10 frames per cardiac cycle, and preferably 20 for better temporal resolution. Therefore, in one embodiment frames are acquired at a rate of at least 160 frames per second, with a field of view large enough to include a long axis view of the mouse heart and surrounding tissue (10-12 mm). For example, using a 30 MHz linear array, the frame rate for a 12 mm field of view is about 180 frames per second. For smaller fields of view, the frame rates used are higher (e.g., for a 2 mm field of view with the 30 MHz linear array, frame rates of over 900 frames per second can be used for viewing rapidly moving structures such as a heart valve).
The maximum velocities present in the mouse circulatory system (in the aorta) may be as high as 1 m/s in normal adult mice, but in pathological cases can be as high as 4-5 m/s. To acquire and display unaliased PW Doppler signals from the mouse aorta, the Pulse Repetition Frequency (PRF) for PW Doppler must be relatively high. In the exemplary system, PW Doppler mode PRFs as high as 150 KHz are used, which for a center frequency of 30 MHz and a Doppler angle of 60°, allows for unaliased measurement of blood velocities of 3.8 m/s.
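The quoted figure follows from the usual Doppler aliasing relation; a minimal check, assuming a sound speed of 1540 m/s:

import math

# Maximum unaliased Doppler velocity from the PRF, center frequency and Doppler angle.
def max_unaliased_velocity(prf_hz, f0_hz, angle_deg, c=1540.0):
    # the Nyquist limit on the Doppler shift is PRF/2; v = fd * c / (2 * f0 * cos(theta))
    return (prf_hz / 2.0) * c / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

print(round(max_unaliased_velocity(150e3, 30e6, 60.0), 2))   # ~3.85 m/s, consistent with the text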
The frame rate for B-Mode Imaging is determined by the two-way transit time of ultrasound to the maximum depth in tissue from which signals are detected, the number of lines per frame, the number of transmit focal zones, the number of lines processed for each transmit pulse and the overhead processing time between lines and frames. Images obtained with different transmit focal zone locations can be "stitched" together for improved resolution throughout the image at the expense of frame rate, which will decrease by a factor equal to the number of zones.
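A rough estimate of the resulting B-Mode frame rate can be formed from these quantities; in the sketch below, the line count, the imaging depth and the per-line overhead are assumed values for illustration, not figures from the text:

# Rough B-Mode frame rate estimate from depth, line count, focal zones and multi-line factor.
def bmode_frame_rate(depth_m, lines, zones=1, multiline=1, overhead_s=5e-6, c=1540.0):
    line_time = 2.0 * depth_m / c + overhead_s     # two-way transit time plus per-line overhead
    transmit_events = lines * zones / multiline    # multi-line processing reduces transmit events
    return 1.0 / (transmit_events * line_time)

print(round(bmode_frame_rate(0.012, 256)))         # roughly 190 fps for an assumed 12 mm, 256 line frame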
Lower or higher transmit center frequencies can be selected for increased penetration or increased resolution, either user selectable or automatically linked to the transmit focal zone location. Multi-line processing, which involves the parallel processing of ultrasound lines, can be used to increase frame rate.
PW Doppler features include a PRF range from about 500 Hz to about 150 KHz, alternate transmit frequency selection, the selection of range gate size and position, the selection of high-pass filter cut-off, and duplex mode operation in which a real-time B-Mode image is displayed simultaneously with the PW Doppler data. The PW Doppler transmit frequency may be the same as the transmit frequency used in B-Mode, or it may be different. The ability to steer the PW Doppler beam is dependent on the frequency and pitch of the array used, and the directivity of the elements in the array, as would be appreciated by one skilled in the art. For an array with a pitch of 75 microns and operating in PW Doppler mode at a transmit frequency of 24 MHz, the beam may be steered up to approximately 20°. For this array, larger steering angles would result in unacceptably large grating lobes, which would contribute to the detection of artifactual signals.
Color flow imaging (CFI) can be used to provide estimates of the mean velocity of flow within a region of tissue. The region over which the CFI data is processed is called a "color box." B-Mode data is usually acquired nearly simultaneously with the Color Flow data, by interleaving the B-Mode lines with Color Flow lines. The Color Flow data can be displayed as an overlay on the B-Mode frame such that the two data sets are aligned spatially. CFI includes a PRF range from about 500 Hz to about 25 to 75 KHz, depending on the type of array. With 40 MHz center frequency and 0° angle between ultrasound beam axis and velocity vector, maximum unaliased velocity will be about 0.72 m/s. Beam steering can depend on the characteristics of the array (specifically, the element spacing), the transmit frequency, and the capabilities of the beamformer; e.g., steering may not be available at the primary center frequency, but may be available at an alternate (lower) frequency. For an array with a pitch of 75 microns and operating in CFI mode at a transmit frequency of 24 MHz, the beam can be steered up to approximately 20°. Larger steering angles would result in unacceptable grating lobe levels. Color flow imaging features can include the selection of the color box size and position, transmit focal depth selection, alternate frequency selection, range gate size selection, and selection of high pass filter cut-off. Power Doppler is a variation of CFI which can be used to provide estimates of the power of the Doppler signal arising from the tissue within the color box. Tissue Doppler mode is a variation of CFI in which mean velocity estimates from moving tissue are provided. Multi-line processing is a method which may be applied to the CFI modes, in which more than one line of receive data is processed for each transmit pulse transmitted.
The beamformer may be capable of supporting modes in which 2-D imaging and Doppler modes are active nearly simultaneously, by interleaving the B-Mode lines with the Doppler lines. 3-D imaging, as known to one of ordinary skill in the art, utilizes mechanical scanning in elevation direction.
Throughout this application, various publications are referenced. The disclosures of these publications in their entireties are hereby incorporated by reference into this application in order to more fully describe the state of the art to which this invention pertains.
Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; and the number or type of embodiments described in the specification. It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims

What is claimed is:
1. An ultrasound imaging system, comprising: an arrayed ultrasonic transducer having a plurality of elements for transmitting into a subject a transmitted ultrasound signal at a transmitted center frequency of up to at least 55 megahertz (MHz); a signal processing unit operatively connected with said arrayed ultrasonic transducer, wherein said signal processing unit is further comprised of, a digital transmit beamformer subsystem comprised of one or more field programmable gate arrays (FPGA) each having an FPGA clock frequency (FPGA fc) and said digital transmit beamformer subsystem having a delay resolution time of at least [1/(2 x FPGA fc)] or greater, a receive beamformer subsystem, a front end electronics module, a beamformer control module, a signal processing module, and a computer unit; wherein said signal processing unit is adapted for acquiring a received ultrasound signal having a frequency of at least 15 MHz from said arrayed ultrasound transducer having a plurality of elements.
2. The ultrasound imaging system of claim 1, wherein said digital transmit beamformer subsystem further comprises a parallel to serial converter having a double data rate (DDR) output, wherein said digital transmit beamformer subsystem is configured to transmit an ultrasound signal having a transmit center frequency up to at least 55 MHz with a delay resolution time of at least [1/(2 x FPGA fc)] or greater by encoding a fine delay and half-cycle sections of said transmitted ultrasound signal into bit words that are converted to a serial bit stream by said parallel to serial converter.
3. The ultrasound imaging system of claim 2, wherein said digital transmit beamformer subsystem is configured to transmit an ultrasound signal having a transmit center frequency up to at least 55 MHz with a delay resolution time of at least [1/(2 x FPGA fc)] or greater by encoding a fine delay and half-cycle sections of said transmitted ultrasound signal into up to 16 bit words that are converted to a serial bit stream by said parallel to serial converter.
4. The ultrasound imaging system of claim 2, wherein the transmitted ultrasound signal is comprised of a positive transmit pulse having a positive pulse width and a negative transmit pulse having a negative pulse width, and said positive pulse width and said negative pulse width are independently adjustable.
5. The ultrasound imaging system of claim 4, wherein said transmitted ultrasound signal is comprised of a plurality of positive half- wave cycle sections and each positive half- wave cycle section is comprised of at least one said positive transmit pulse and said transmitted ultrasound signal is further comprised of a plurality of negative half-wave cycle sections and each negative half-wave cycle section is comprised of at least one said negative transmit pulse, and each positive transmit pulse is adjustable in duration for each positive half-wave cycle section and each negative transmit pulse is adjustable in duration for each negative half- wave cycle section.
6. The ultrasound imaging system of claim 5, wherein each fine delay of said at least one said positive transmit pulse within each positive half-wave cycle is adjustable and each fine delay of said at least one said negative transmit pulse within each negative half-wave cycle is adjustable.
7. The ultrasound imaging system of claim 4, wherein the transmitted ultrasound signal is comprised of a number of cycles transmitted, and said number of cycles transmitted is adjustable.
8. The ultrasound imaging system of claim 1, wherein said front end electronics module is constructed as a replaceable plug-in module.
9. The ultrasound imaging system of claim 1, wherein the signal processing unit is adapted for acquiring the received ultrasound signal from the arrayed ultrasonic transducer and said arrayed ultrasonic transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
10. The ultrasound imaging system of claim 1, wherein the transmitted ultrasound signal has a transmitted center frequency of about 15 MHz up to at least 55 MHz.
11. The ultrasound imaging system of claim 1, wherein the transmitted ultrasound signal has a transmitted center frequency of about 15 MHz up to about 55 MHz.
12. A signal processing unit for an arrayed ultrasound imaging system comprising: a digital transmit beamformer subsystem configured to operate up to at least a 55 MHz transmit center frequency, wherein said digital transmit beamformer subsystem is further comprised of one or more field programmable gate arrays (FPGA) and each having an FPGA clock frequency (FPGA fc) and said digital transmit beamformer subsystem having a delay resolution time of at least [1/(2 x FPGA fc)] or greater; a digital receive beamformer subsystem; a front end electronics module; a beamformer control module; a signal processing module; and a computer unit, wherein said signal processing unit is configured to acquire a received ultrasound signal from an arrayed ultrasound transducer having a plurality of elements.
13. The signal processing unit of claim 12, wherein said digital transmit beamformer subsystem further comprises a parallel to serial converter having a double data rate (DDR) output, wherein said digital transmit beamformer subsystem is configured to transmit an ultrasound signal having a transmit center frequency up to at least 55 MHz with a delay resolution time of at least [1/(2 x FPGA fc)] or greater by encoding a fine delay and half-cycle sections of said transmitted ultrasound signal into bit words that are converted to a serial bit stream by said parallel to serial converter.
14. The signal processing unit of claim 13, wherein said digital transmit beamformer subsystem is configured to transmit an ultrasound signal having a transmit center frequency up to at least 55 MHz with a delay resolution time of at least [1/(2 x FPGA fc)] or greater by encoding a fine delay and half-cycle sections of said transmitted ultrasound signal into up to 16 bit words that are converted to a serial bit stream by said parallel to serial converter.
15. The signal processing unit of claim 13, wherein the transmitted ultrasound signal is comprised of a positive transmit pulse width and a negative transmit pulse width, and said positive transmit pulse width and negative transmit pulse width are independently adjustable.
16. The signal processing unit of claim 15, wherein the transmitted ultrasound signal is comprised of a number of cycles transmitted, and said number of cycles transmitted is adjustable.
17. The signal processing unit of claim 12, wherein said front end electronics module is constructed as a replaceable plug-in module.
18. The signal processing unit of claim 12, wherein the signal processing unit is adapted for acquiring the received ultrasound signal from the arrayed ultrasonic transducer and said arrayed ultrasonic transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
19. The signal processing unit of claim 12, wherein the transmitted center frequency is about 15 MHz up to at least 55 MHz.
20. The signal processing unit of claim 12, wherein the transmitted center frequency is about 15 MHz up to about 55 MHz.
21. A digital transmit beamformer for an arrayed ultrasound imaging system comprising: one or more FPGAs each having an FPGA clock frequency (FPGA fc); and a parallel to serial converter having a double data rate (DDR) output, wherein said digital transmit beamformer is configured to transmit an ultrasound signal having a transmit center frequency up to at least 55 MHz with a delay resolution time of at least [1/(2 x FPGA fc)] or greater by encoding a fine delay and half-cycle sections of said transmitted ultrasound signal into bit words that are converted to a serial bit stream by said parallel to serial converter.
22. The digital transmit beamformer of claim 21, wherein the transmitted ultrasound signal is comprised of a positive transmit pulse width and a negative transmit pulse width, and said positive transmit pulse width and negative transmit pulse width are independently adjustable.
23. The digital transmit beamformer of claim 22, wherein the transmitted ultrasound signal is comprised of a number of cycles transmitted, and said number of cycles transmitted is adjustable.
24. The digital transmit beamformer of claim 21, wherein the digital transmit beamformer is configured to be operatively connected with an arrayed ultrasonic transducer and said arrayed ultrasonic transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
25. The digital transmit beamformer of claim 21, wherein the transmitted center frequency is about 15 MHz up to at least 55 MHz.
26. The digital transmit beamformer of claim 21, wherein the transmitted center frequency is about 15 MHz up to about 55 MHz.
27. The digital transmit beamformer of claim 21, wherein said digital transmit beamformer is configured to transmit an ultrasound signal having a transmit center frequency up to at least 55 MHz with a delay resolution time of at least [1/(2 x FPGA fc)] or greater by encoding a fine delay and half-cycle sections of said transmitted ultrasound signal into up to 16-bit words that are converted to a serial bit stream by said parallel to serial converter.
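Claims 13, 14, 21, 27, 33, and 34 describe encoding the fine delay and half-cycle sections of the transmit waveform into bit words of up to 16 bits that a parallel to serial converter with a double data rate (DDR) output shifts out, so that the timing resolution of the waveform is one half of the FPGA clock period, [1/(2 x FPGA fc)]. The Python sketch below is only an illustration of that encoding idea; the clock frequency, word width, pulse lengths, and function names are assumptions for the example and do not reproduce the claimed hardware.

# Illustrative sketch (assumed values, not the patented implementation): how a
# DDR parallel-to-serial converter yields a transmit delay resolution of
# 1/(2 x FPGA_fc).

FPGA_FC_HZ = 200e6                       # assumed FPGA clock frequency
BIT_PERIOD_S = 1.0 / (2 * FPGA_FC_HZ)    # one serialized bit per half clock period

def encode_half_cycle(fine_delay_bits, active_bits, word_width=16):
    """Build one up-to-16-bit word: 'fine_delay_bits' zeros (the fine delay),
    then 'active_bits' ones (the active portion of a half-cycle)."""
    assert fine_delay_bits + active_bits <= word_width
    return [0] * fine_delay_bits + [1] * active_bits + \
           [0] * (word_width - fine_delay_bits - active_bits)

def serialize(words):
    """Flatten parallel words into the serial bit stream the DDR output
    would shift out, one bit per 1/(2 x FPGA_fc)."""
    return [b for w in words for b in w]

# Example: delay the leading edge of a positive half-cycle by 3 bit periods
# and keep it high for 9 bit periods.
word = encode_half_cycle(fine_delay_bits=3, active_bits=9)
stream = serialize([word])
print("bit period: %.2f ns" % (BIT_PERIOD_S * 1e9))
print("delay applied: %.2f ns" % (3 * BIT_PERIOD_S * 1e9))
print("serial stream:", stream)

Because the width of each half-cycle is set by the number of ones in the word, the same encoding also illustrates how the positive and negative pulse widths of claims 15, 16, 22, and 23 can be adjusted independently.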
28. An ultrasound imaging system, comprising: an arrayed ultrasonic transducer having a plurality of elements for transmitting into a subject a transmitted ultrasound signal at a transmitted center frequency of up to at least 55 megahertz (MHz); a signal processing unit operatively connected with said arrayed ultrasonic transducer, wherein said signal processing unit is further comprised of, a digital transmit beamformer subsystem, a receive beamformer subsystem, a front end electronics module, a beamformer control module,
a signal processing module utilizing quadrature sampling and having a receive sampling frequency, and a computer unit; wherein said signal processing unit is configured so that said receive sampling frequency may be selectively chosen.
29. The ultrasound imaging system of claim 28, wherein said receive sampling frequency is chosen at a frequency different from said transmitted center frequency.
30. The ultrasound imaging system of claim 28, wherein said receive sampling frequency is chosen at a frequency the same as said transmitted center frequency.
31. The ultrasound imaging system of claim 28, wherein said digital transmit beamformer subsystem further comprises a transmit focal depth such that said receive sampling frequency is dependent upon said transmit focal depth.
32. The ultrasound imaging system of claim 31, wherein said receive sampling frequency decreases as said transmit focal depth increases.
33. The ultrasound imaging system of claim 28, wherein said digital transmit beamformer subsystem further comprises a parallel to serial converter having a double data rate (DDR) output, wherein said digital transmit beamformer is configured to transmit an ultrasound signal having a transmit center frequency up to at least 55 MHz with a delay resolution time of at least [1/(2 x FPGA fc)] or greater by encoding a fine delay and half-cycle sections of said transmitted ultrasound signal into bit words that are converted to a serial bit stream by said parallel to serial converter.
34. The ultrasound imaging system of claim 33, wherein said digital transmit beamformer subsystem is configured to transmit an ultrasound signal having a transmit center frequency up to at least 55 MHz with a delay resolution time of at least [1/(2 x FPGA fc)] or greater by encoding a fine delay and half-cycle sections of said transmitted ultrasound signal into up to 16-bit words that are converted to a serial bit stream by said parallel to serial converter.
35. The ultrasound imaging system of claim 33, wherein the transmitted ultrasound signal is comprised of a positive transmit pulse width and a negative transmit pulse width, and said positive transmit pulse width and negative transmit pulse width are independently adjustable.
36. The ultrasound imaging system of claim 35, wherein the transmitted ultrasound signal is comprised of a number of cycles transmitted, and said number of cycles transmitted is adjustable.
37. The ultrasound imaging system of claim 28, wherein said front end electronics module is constructed as a replaceable plug-in module.
38. The ultrasound imaging system of claim 28, wherein the signal processing unit is adapted for acquiring the received ultrasound signal from the arrayed ultrasonic transducer and said arrayed ultrasonic transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
39. The ultrasound imaging system of claim 28, wherein the transmitted ultrasound signal has a transmitted center frequency of about 15 MHz up to at least 55 MHz.
40. The ultrasound imaging system of claim 28, wherein the transmitted ultrasound signal has a transmitted center frequency of about 15 MHz up to about 55 MHz.
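Claims 28 through 32 allow the receive sampling frequency to be chosen independently of the transmitted center frequency and to decrease as the transmit focal depth increases. A minimal sketch of such a selection rule follows; the break points and scaling factors are assumptions for illustration only and are not taken from the specification.

# Illustrative sketch (assumed values): choosing a receive sampling frequency
# that decreases as the transmit focal depth increases.  Higher-frequency
# content is attenuated more strongly at depth, so a lower sampling frequency
# can be selected for deeper focal zones.

def select_receive_sampling_frequency(transmit_fc_mhz, focal_depth_mm):
    """Return an assumed receive sampling frequency in MHz."""
    if focal_depth_mm <= 6.0:
        return transmit_fc_mhz          # sample at the transmit center frequency
    elif focal_depth_mm <= 12.0:
        return 0.75 * transmit_fc_mhz   # mid-range focal depths
    else:
        return 0.5 * transmit_fc_mhz    # deepest focal zones

for depth in (3.0, 8.0, 20.0):
    fs = select_receive_sampling_frequency(40.0, depth)
    print("focal depth %5.1f mm -> receive sampling frequency %5.1f MHz" % (depth, fs))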
41. An ultrasound imaging system, comprising: an arrayed ultrasonic transducer having a plurality of elements for transmitting into a subject a transmitted ultrasound signal at a transmitted center frequency of up to at least 55 megahertz (MHz), wherein said arrayed ultrasonic transducer has a field of view of at least 5.0 millimeters (mm); a signal processing unit operatively connected with said arrayed ultrasonic transducer, wherein said processing unit is further comprised of digital transmit and receive beamformer subsystems, a front end electronics module, a beamformer control and signal processing module, and a computer unit, wherein said signal processing unit is adapted for acquiring a received ultrasound signal from said arrayed ultrasound transducer having a plurality of elements at a frame rate of at least 20 frames per second (fps), wherein the received ultrasound signal has a frequency of at least 15 MHz.
42. The system of claim 41, wherein the signal processing unit further produces an ultrasound image from the received ultrasound signal.
43. The system of claim 42, wherein the received ultrasound signal is processed by said signal processing unit to generate the ultrasound image at a display rate that is slower than the acquisition rate.
44. The system of claim 43, wherein the display rate of the generated ultrasound image is about 100 fps or less.
45. The system of claim 44, wherein the display rate of the generated ultrasound image is about 30 fps or less.
46. The system of claim 42, wherein the ultrasound image is produced by said signal processing unit in an ultrasound mode selected from the group consisting of B-mode, M-mode, Doppler mode, RF-mode, and 3-D mode.
47. The system of claim 41, wherein the signal processing unit is adapted for acquiring the received ultrasound signal from the arrayed ultrasonic transducer and said arrayed ultrasonic transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
48. The system of claim 41, wherein the arrayed ultrasonic transducer has a center operating frequency of at least 15 MHz and the arrayed ultrasonic transducer has an element pitch equal to or less than 2.0 times the wavelength of sound at the arrayed ultrasonic transducer's transmitted center frequency.
49. The system of claim 41, wherein the arrayed ultrasonic transducer has an element pitch equal to or less than 1.5 times the wavelength of sound at the arrayed ultrasonic transducer's transmitted center frequency.
50. The system of claim 41, wherein the digital receive beamformer subsystem comprises using at least one Field Programmable Gate Array (FPGA) device.
51. The system of claim 41, wherein the digital transmit beamformer subsystem comprises using at least one Field Programmable Gate Array (FPGA) device.
52. The system of claim 41, wherein the front end electronics module further comprises a transmit circuit and a receive channel, wherein the transmit circuit comprises a transmit supply voltage connected through two field-effect transistors (FETs) to a transformer with a center-tapped winding in which the arrayed ultrasonic transducer is operatively connected to a first end of a secondary winding of the transformer and an input to the receive channel is connected to a second end of the secondary winding of the transformer such that said transmit supply voltage is set to substantially zero and said two FETs are turned on when said receive channel is receiving a signal and said transformer generates a transmit signal and couples the transmit signal to the arrayed ultrasonic transducer when said transmit circuit is transmitting a signal.
53. The system of claim 52, wherein the front end electronics module further comprises two or more signal samplers for each receive channel.
54. The system of claim 53, wherein the signal samplers are analog to digital converters.
55. The system of claim 53, wherein the signal samplers use quadrature sampling to sample a received signal.
56. The system of claim 55, wherein the signal samplers comprise sampling clocks shifted 90 degrees out of phase.
57. The system of claim 56, wherein the sampling clocks have a receive clock period approximately equal to the period of the center frequency of a received ultrasound signal.
58. The system of claim 57, wherein a delay resolution of less than the receive clock period is used to process the acquired signal.
59. The system of claim 58, wherein the delay resolution is 1/16 of the receive clock period.
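Claims 53 through 59 (and 96 through 101) describe two signal samplers per receive channel whose sampling clocks are shifted 90 degrees out of phase, operated near the received center frequency, with a beamforming delay resolution of 1/16 of the receive clock period. The sketch below illustrates, for a narrowband echo, how such an in-phase/quadrature sample pair can be delayed by a fraction of the clock period as a phase rotation; it is a simplified model with assumed values, not the claimed processing chain.

# Illustrative sketch (assumptions throughout): quadrature sampling with two
# sample clocks 90 degrees apart at the received center frequency, and a
# beamforming delay of 1/16 of the clock period applied as an I/Q rotation.

import math

FC_HZ = 30e6                  # assumed received center frequency
T = 1.0 / FC_HZ               # receive clock period
FINE_DELAY = T / 16.0         # delay resolution of 1/16 of the clock period

def echo(t, amplitude=1.0, phase=0.3):
    """Toy narrowband echo at the center frequency."""
    return amplitude * math.cos(2 * math.pi * FC_HZ * t + phase)

# Two samplers per channel: the second clock is shifted by a quarter period
# (90 degrees), so together they capture in-phase (I) and quadrature (Q) data.
t0 = 5 * T
i_sample = echo(t0)
q_sample = echo(t0 + T / 4.0)

# For a narrowband signal, a delay tau is a phase rotation of 2*pi*fc*tau.
theta = 2 * math.pi * FC_HZ * FINE_DELAY          # 22.5 degrees for T/16
i_delayed = i_sample * math.cos(theta) - q_sample * math.sin(theta)
q_delayed = q_sample * math.cos(theta) + i_sample * math.sin(theta)

print("I, Q at the sample clock:       %+.4f, %+.4f" % (i_sample, q_sample))
print("I, Q delayed by T/16 (rotated): %+.4f, %+.4f" % (i_delayed, q_delayed))
print("direct check, echo(t0 - tau):   %+.4f" % echo(t0 - FINE_DELAY))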
60. The system of claim 41, wherein each element of said arrayed ultrasonic transducer having a plurality of elements is operatively connected to a receive channel.
61. The system of claim 60, wherein the number of elements of said arrayed ultrasonic transducer having a plurality of elements is greater than the number of receive channels.
62. The system of claim 60, wherein the arrayed ultrasonic transducer having a plurality of elements comprises at least 64 elements that are operatively connected to at least 32 receive channels.
63. The system of claim 60, wherein the arrayed ultrasonic transducer having a plurality of elements comprises 256 elements that are operatively connected to 64 receive channels.
64. The system of claim 60, wherein the arrayed ultrasonic transducer having a plurality of elements comprises 256 elements that are operatively connected to 128 receive channels.
65. The system of claim 60, wherein the arrayed ultrasonic transducer having a plurality of elements comprises 256 elements that are operatively connected to 256 receive channels.
66. The system of claim 60, wherein the arrayed ultrasonic transducer having a plurality of elements comprises 256 elements.
67. The system of claim 66, wherein 512 lines of ultrasound are generated, transmitted into the subject and received from the subject for each frame of the generated ultrasound image.
68. The system of claim 66, wherein 256 lines of ultrasound are generated, transmitted into the subject and received from the subject for each frame of the generated ultrasound image.
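Claims 60 through 65 connect an array with more elements (for example 256) to a smaller number of receive channels (for example 64 or 128). One conventional way to do this, sketched below purely as an assumed illustration rather than the claimed hardware, is to multiplex a contiguous sub-aperture of elements onto the receive channels and walk that sub-aperture across the array from scan line to scan line.

# Illustrative sketch (assumed scheme): walking a 64-channel receive aperture
# across a 256-element array so that every element is used over a frame.

NUM_ELEMENTS = 256
NUM_CHANNELS = 64

def aperture_for_line(line_index):
    """Return the element indices connected to the receive channels for one
    scan line: a contiguous 64-element window roughly centered on the line."""
    start = min(max(line_index - NUM_CHANNELS // 2, 0), NUM_ELEMENTS - NUM_CHANNELS)
    return list(range(start, start + NUM_CHANNELS))

for line in (0, 128, 255):
    window = aperture_for_line(line)
    print("line %3d -> elements %3d..%3d on channels 0..%d"
          % (line, window[0], window[-1], NUM_CHANNELS - 1))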
69. The system of claim 41, wherein at least two lines of ultrasound are generated, transmitted into the subject and received from the subject at each element of the array for each frame of the generated ultrasound image.
70. The system of claim 41, wherein one line of ultrasound is generated, transmitted into the subject and received from the subject at each element of the array for each frame of the generated ultrasound image.
71. The system of claim 70, wherein the received ultrasound signals are acquired at an acquisition rate of at least 200 frames per second (fps).
72. The system of claim 41, wherein the elements of the arrayed ultrasonic transducer having a plurality of elements are separated by a distance equal to the wavelength of the center transmit frequency of the transducer.
73. The system of claim 72, wherein the center transmit frequency is selected from the group consisting of 15 MHz, 20 MHz, 30 MHz, 40 MHz, 50 MHz, and 55 MHz.
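Claims 48, 49, 72, and 73 bound the element pitch by the wavelength of sound at the transmitted center frequency. Assuming a soft-tissue sound speed of roughly 1540 m/s (a value not recited in the claims), the wavelengths and pitch limits at the center frequencies listed in claim 73 can be worked out as in the following sketch.

# Illustrative arithmetic (assumed sound speed): wavelength and element pitch
# limits of 2.0x and 1.5x the wavelength at the listed center frequencies.

SOUND_SPEED_M_S = 1540.0   # assumed speed of sound in soft tissue

for fc_mhz in (15, 20, 30, 40, 50, 55):
    wavelength_um = SOUND_SPEED_M_S / (fc_mhz * 1e6) * 1e6
    print("%2d MHz: wavelength %6.1f um, pitch <= %6.1f um (2.0x) or %6.1f um (1.5x)"
          % (fc_mhz, wavelength_um, 2.0 * wavelength_um, 1.5 * wavelength_um))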
74. The system of claim 41, wherein a length of the arrayed ultrasonic transducer having a plurality of elements is equal to a field of view of the transducer.
75. The system of claim 41, wherein the arrayed ultrasonic transducer having a plurality of elements can transmit ultrasound into the subject at a center frequency within the range of about 15 MHz to about 80 MHz.
76. The system of claim 41, wherein the received ultrasound signal is acquired at a frame rate of at least 200 fps.
77. The system of claim 41, wherein the received ultrasound signal is acquired at a frame rate within the range of about 100 fps to about 200 fps.
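Claims 41 through 45, 66 through 71, 76, and 77 relate the acquisition frame rate to the number of ultrasound lines per frame. Because each line must wait for the round trip of sound to the maximum imaging depth, an upper bound on the frame rate follows from simple arithmetic; the depth and sound speed below are assumed example values only.

# Illustrative arithmetic (assumed depth and sound speed): an upper bound on
# the acquisition frame rate from the number of scan lines per frame and the
# round-trip time of each line.

SOUND_SPEED_M_S = 1540.0     # assumed soft-tissue sound speed
IMAGING_DEPTH_M = 0.010      # assumed 10 mm imaging depth

round_trip_s = 2 * IMAGING_DEPTH_M / SOUND_SPEED_M_S   # ~13 microseconds per line

for lines_per_frame in (256, 512):
    max_fps = 1.0 / (lines_per_frame * round_trip_s)
    print("%3d lines/frame -> at most %5.0f frames per second" % (lines_per_frame, max_fps))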
78. The system of claim 41, wherein the ultrasound image has a lateral resolution of about 150 microns (μm) or less.
79. The system of claim 78, wherein the ultrasound image has an axial resolution of about 75 microns (μm) or less.
80. The system of claim 79, wherein the ultrasound image has a spatial resolution of about 30 microns (μm) or less.
81. The system of claim 41, wherein the transmitted ultrasound signal can be focused at a depth of about 1.0 mm to about 30.0 mm.
82. The system of claim 81, wherein the transmitted ultrasound signal can be focused at a depth of about 3.0 mm to about 10.0 mm.
83. The system of claim 81, wherein the transmitted ultrasound signal can be focused at a depth of about 2.0 mm to about 12.0 mm.
84. The system of claim 81, wherein the transmitted ultrasound signal can be focused at a depth of about 1.0 mm to about 6.0 mm.
85. The system of claim 81, wherein the transmitted ultrasound signal can be focused at a depth of about 3.0 mm to about 8.0 mm.
86. The system of claim 81, wherein the transmitted ultrasound signal can be focused at a depth of about 5.0 mm to about 18.0 mm.
87. A system for producing an ultrasound image, comprising: an arrayed ultrasound transducer having a plurality of elements for generating and transmitting into a subject ultrasound at a center operating frequency of up to at least 55 megahertz (MHz) and for receiving ultrasound signals from the subject; and a processing unit for generating an ultrasound image frame, wherein each element of the arrayed ultrasound transducer can transmit two or more lines of ultrasound into the subject and receive from the subject two or more lines of echoed ultrasound for each frame of the generated ultrasound image.
88. The system of claim 87, wherein the arrayed ultrasound transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
89. The system of claim 87, wherein the arrayed ultrasound transducer's center operating frequency is at least 15 MHz and the arrayed ultrasound transducer has an element pitch equal to or less than two times the wavelength of sound at the arrayed ultrasound transducer's transmitted center frequency.
90. A system for producing an ultrasound image, comprising: an arrayed ultrasound transducer having a plurality of elements for generating and transmitting into a subject ultrasound at a frequency of up to at least 55 megahertz (MHz) and for receiving ultrasound signals from the subject, each element being operatively connected to a receive channel; and a processing unit for acquiring the received ultrasound signals and for producing an ultrasound image from the acquired signals, wherein the processing unit comprises a plurality of signal samplers that use quadrature sampling for acquiring the signals.
91. The system of claim 90, wherein the ultrasound image is produced in an ultrasound mode selected from the group consisting of B-mode, M-mode, Doppler mode, RF-mode, and 3-D mode.
92. The system of claim 90, wherein the arrayed ultrasound transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
93. The system of claim 90, wherein the arrayed ultrasound transducer has a center operating frequency of at least 20 MHz and the arrayed ultrasound transducer has an element pitch equal to or less than two times the wavelength of sound at the arrayed ultrasound transducer's transmitted center frequency.
94. The system of claim 90, wherein each element of the arrayed ultrasound transducer is operatively connected to a receive channel.
95. The system of claim 94, wherein the number of elements of the arrayed ultrasound transducer is greater than the number of receive channels.
96. The system of claim 94, wherein the processing unit comprises two or more signal samplers for each receive channel.
97. The system of claim 96, wherein the signal samplers are analog to digital converters.
98. The system of claim 97, wherein the signal samplers comprise sampling clocks shifted 90 degrees out of phase.
99. The system of claim 98, wherein the sampling clocks have a receive clock period approximately equal to the period of the center frequency of a received ultrasound signal.
100. The system of claim 99, wherein a delay resolution of less than the receive clock period is used to process the acquired signal.
101. The system of claim 100, wherein the delay resolution is 1/16 of the receive clock period.
102. The system of claim 90, wherein an acquired signal is processed using an interpolation filtration method.
103. The system of claim 90, wherein the processing unit comprises a receive beamformer, wherein the receive beamformer is implemented using at least one Field Programmable Gate Array (FPGA) device.
104. The system of claim 90, wherein the processing unit comprises a transmit beamformer, wherein the transmit beamformer is implemented using at least one Field Programmable Gate Array (FPGA) device.
105. A system for producing an ultrasound image, comprising: an arrayed ultrasound transducer having a plurality of elements for generating and transmitting into a subject ultrasound at a frequency of up to at least 55 megahertz (MHz) and at a pulse repetition frequency (PRF) of at least 500 hertz (Hz), and for receiving ultrasound from the subject, the arrayed ultrasound transducer having a field of view of at least 5.0 millimeters (mm); and a processing unit for generating a color flow Doppler ultrasound image from the received ultrasound.
106. The system of claim 105, wherein the arrayed ultrasound transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
107. The system of claim 105, wherein the arrayed ultrasound transducer has a center operating frequency of at least 20 MHz and the arrayed ultrasound transducer has an element pitch equal to or less than two times the wavelength of sound at the arrayed ultrasound transducer's transmitted center frequency.
108. The system of claim 105, wherein the PRF is between about 500 Hz and about 75 kHz.
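Claims 105 through 112 recite color flow and pulsed wave Doppler imaging at pulse repetition frequencies from about 500 Hz up to about 75 kHz or 150 kHz. For a conventional pulsed Doppler estimator, the PRF sets the maximum axial velocity that can be measured without aliasing, v_max = PRF x c / (4 x f0); the sketch below evaluates this standard relation for assumed example values and is not taken from the specification.

# Illustrative arithmetic (standard Doppler relation, assumed example values):
# maximum unambiguous axial velocity for a given PRF and center frequency.

SOUND_SPEED_M_S = 1540.0         # assumed soft-tissue sound speed

def max_unambiguous_velocity(prf_hz, fc_hz):
    return prf_hz * SOUND_SPEED_M_S / (4.0 * fc_hz)

for prf_hz in (500.0, 10e3, 75e3):
    v = max_unambiguous_velocity(prf_hz, 40e6)   # assumed 40 MHz center frequency
    print("PRF %7.0f Hz at 40 MHz -> v_max = %6.1f mm/s" % (prf_hz, v * 1000))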
109. A system for producing an ultrasound image, comprising: an arrayed ultrasound transducer having a plurality of elements for generating and transmitting into a subject ultrasound at a frequency of up to at least 55 megahertz (MHz) and at a pulse repetition frequency (PRF) of at least 500 hertz (Hz), and for receiving ultrasound from the subject, the arrayed ultrasound transducer having a field of view of at least 5.0 millimeters (mm); and a processing unit for generating a pulsed wave Doppler ultrasound image from the received ultrasound.
110. The system of claim 109, wherein the arrayed ultrasound transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
111. The system of claim 109, wherein the arrayed ultrasound transducer has a center operating frequency of at least 15 MHz and the arrayed ultrasound transducer has an element pitch equal to or less than two times the wavelength of sound at the arrayed ultrasound transducer's transmitted center frequency.
112. The system of claim 109, wherein the PRF is between about 500 Hz and about 150 kHz.
113. A system for producing an ultrasound image, comprising: an arrayed ultrasound transducer having a plurality of elements for generating and transmitting into a subject ultrasound at a frequency of up to at least 15 megahertz (MHz) and for receiving ultrasound signals from the subject, the arrayed ultrasound transducer having a field of view of at least 2.0 millimeters (mm); a processing unit for acquiring the received ultrasound signals at an acquisition rate of at least 300 frames per second (fps) and for generating an ultrasound image from the acquired signals.
114. The system of claim 113, wherein the arrayed ultrasound transducer is selected from the group consisting of a linear array transducer, a phased array transducer, a two-dimensional (2-D) array transducer, and a curved array transducer.
115. The system of claim 113, wherein the arrayed ultrasound transducer has a center operating frequency of at least 20 MHz and the arrayed ultrasound transducer has an element pitch equal to or less than two times the wavelength of sound at the arrayed ultrasound transducer's transmitted center frequency.
116. The ultrasound imaging system of claim 1, wherein FPGA fc is the highest operable frequency of the one or more FPGA.
117. The signal processing unit of claim 12, wherein FPGA fc is the highest operable frequency of the one or more FPGA.
118. The digital transmit beamformer of claim 21, wherein FPGA fc is the highest operable frequency of the one or more FPGA.
PCT/US2006/042891 2005-11-02 2006-11-02 High frequency array ultrasound system WO2007056104A2 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
ES06827417T ES2402741T3 (en) 2005-11-02 2006-11-02 Digital transmission beam shaper for an ultrasonic transducer system with distribution
CN200680050246.3A CN101351724B (en) 2005-11-02 2006-11-02 High frequency array ultrasound system
EP06827417A EP1952175B1 (en) 2005-11-02 2006-11-02 Digital transmit beamformer for an arrayed ultrasound transducer system
CA2628100A CA2628100C (en) 2005-11-02 2006-11-02 High frequency array ultrasound system
JP2008539044A JP5630958B2 (en) 2005-11-02 2006-11-02 High frequency array ultrasound system
HK09106667.0A HK1129243A1 (en) 2005-11-02 2009-07-21 High frequency array ultrasound system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US73308905P 2005-11-02 2005-11-02
US73309105P 2005-11-02 2005-11-02
US60/733,089 2005-11-02
US60/733,091 2005-11-02

Publications (3)

Publication Number Publication Date
WO2007056104A2 true WO2007056104A2 (en) 2007-05-18
WO2007056104A9 WO2007056104A9 (en) 2007-07-12
WO2007056104A3 WO2007056104A3 (en) 2007-08-30

Family

ID=37865774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/042891 WO2007056104A2 (en) 2005-11-02 2006-11-02 High frequency array ultrasound system

Country Status (8)

Country Link
US (2) US7901358B2 (en)
EP (1) EP1952175B1 (en)
JP (4) JP5630958B2 (en)
CN (1) CN101351724B (en)
CA (2) CA2935422C (en)
ES (1) ES2402741T3 (en)
HK (1) HK1129243A1 (en)
WO (1) WO2007056104A2 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007140593A1 (en) 2006-06-02 2007-12-13 St. Michael's Hospital Ultrasonic evaluation of venous structures
WO2010055819A1 (en) * 2008-11-14 2010-05-20 株式会社 日立メディコ Ultrasonographic device and method for generating ultrasonogram
US7901358B2 (en) 2005-11-02 2011-03-08 Visualsonics Inc. High frequency array ultrasound system
EP2324337A1 (en) * 2008-08-18 2011-05-25 University Of Virginia Patent Foundation Front end circuitry for imaging systems and methods of use
CN103033807A (en) * 2011-09-30 2013-04-10 中国科学院声学研究所 Portable ultrasonic imaging system receiving front-end device
WO2013087402A1 (en) * 2011-12-13 2013-06-20 Robert Bosch Gmbh Apparatus for detecting audible signals and associated method
CN103826541A (en) * 2012-07-31 2014-05-28 株式会社东芝 Ultrasonic diagnostic device and control method
ES2525600R1 (en) * 2012-05-25 2015-01-29 Consejo Superior De Investigaciones Científicas (Csic) METHOD FOR REAL-TIME CONTROL OF DYNAMIC APPROACH IN ULTRASONIC IMAGE SYSTEMS AND DEVICE SAMPLING ADVANCED CALCULATOR ASSOCIATED WITH THE SAME
US9482748B2 (en) 2011-12-12 2016-11-01 Super Sonic Imagine Ultrasound imaging system, and a processing device used inside said ultrasound imaging system
US9935254B2 (en) 2008-09-18 2018-04-03 Fujifilm Sonosite, Inc. Methods for manufacturing ultrasound transducers and other components
WO2018087400A1 (en) 2016-11-14 2018-05-17 Koninklijke Philips N.V. Triple mode ultrasound imaging for anatomical, functional, and hemodynamical imaging
US10596597B2 (en) 2008-09-18 2020-03-24 Fujifilm Sonosite, Inc. Methods for manufacturing ultrasound transducers and other components
EP3518781A4 (en) * 2016-09-28 2020-06-17 Covidien LP System and method for parallelization of cpu and gpu processing for ultrasound imaging devices
US10772483B2 (en) 2015-08-07 2020-09-15 Olympus Corporation Imaging apparatus
US11715454B2 (en) 2018-08-16 2023-08-01 Samsung Medison Co. Ltd. Beamforming device, method of controlling the same, and ultrasound diagnostic apparatus

Families Citing this family (181)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8213467B2 (en) * 2004-04-08 2012-07-03 Sonosite, Inc. Systems and methods providing ASICs for use in multiple applications
US7230368B2 (en) 2004-04-20 2007-06-12 Visualsonics Inc. Arrayed ultrasonic transducer
WO2006044997A2 (en) * 2004-10-15 2006-04-27 The Trustees Of Columbia University In The City Of New York System and method for localized measurement and imaging of viscosity of tissues
US10687785B2 (en) 2005-05-12 2020-06-23 The Trustees Of Columbia Univeristy In The City Of New York System and method for electromechanical activation of arrhythmias
WO2006124603A2 (en) * 2005-05-12 2006-11-23 The Trustees Of Columbia University In The City Of New York System and method for electromechanical wave imaging of body structures
US10219815B2 (en) 2005-09-22 2019-03-05 The Regents Of The University Of Michigan Histotripsy for thrombolysis
WO2007058895A2 (en) * 2005-11-11 2007-05-24 Visualsonics Inc. Overlay image contrast enhancement
EP1963805A4 (en) * 2005-12-09 2010-01-06 Univ Columbia Systems and methods for elastography imaging
US7750536B2 (en) 2006-03-02 2010-07-06 Visualsonics Inc. High frequency ultrasonic transducer and matching layer comprising cyanoacrylate
US8183745B2 (en) * 2006-05-08 2012-05-22 The Penn State Research Foundation High frequency ultrasound transducers
US8150128B2 (en) * 2006-08-30 2012-04-03 The Trustees Of Columbia University In The City Of New York Systems and method for composite elastography and wave imaging
US20100262013A1 (en) * 2009-04-14 2010-10-14 Smith David M Universal Multiple Aperture Medical Ultrasound Probe
EP2088932B1 (en) 2006-10-25 2020-04-08 Maui Imaging, Inc. Method and apparatus to produce ultrasonic images using multiple apertures
US8312771B2 (en) 2006-11-10 2012-11-20 Siemens Medical Solutions Usa, Inc. Transducer array imaging system
US8490489B2 (en) 2006-11-10 2013-07-23 Siemens Medical Solutions Usa, Inc. Transducer array imaging system
US9295444B2 (en) 2006-11-10 2016-03-29 Siemens Medical Solutions Usa, Inc. Transducer array imaging system
US8499634B2 (en) * 2006-11-10 2013-08-06 Siemens Medical Solutions Usa, Inc. Transducer array imaging system
US20080114250A1 (en) * 2006-11-10 2008-05-15 Penrith Corporation Transducer array imaging system
CN101185580A (en) * 2006-11-15 2008-05-28 深圳迈瑞生物医疗电子股份有限公司 Method and apparatus for gathering ultrasonic diagnosis system high-speed radio-frequency echo wave data
US8147409B2 (en) * 2007-03-29 2012-04-03 Supertex, Inc. Method and apparatus for transducer excitation in medical ultrasound imaging
US7673274B2 (en) * 2007-04-19 2010-03-02 L3 Communications Integrated Systems, LP Datapipe interpolation device
US7717154B2 (en) * 2007-06-22 2010-05-18 Li-Ming Cheng Window coverings
JP2009005802A (en) * 2007-06-27 2009-01-15 Ge Medical Systems Global Technology Co Llc Ultrasonic imaging apparatus
ATE512375T1 (en) * 2007-07-13 2011-06-15 Ezono Ag OPTOELECTRIC ULTRASONIC SENSOR AND SYSTEM
US20100256488A1 (en) * 2007-09-27 2010-10-07 University Of Southern California High frequency ultrasonic convex array transducers and tissue imaging
US9282945B2 (en) 2009-04-14 2016-03-15 Maui Imaging, Inc. Calibration of ultrasound probes
JP5555416B2 (en) * 2007-10-25 2014-07-23 三星メディソン株式会社 Ultrasonic diagnostic apparatus and scan line data forming method
JP2009219794A (en) * 2008-03-18 2009-10-01 Olympus Medical Systems Corp Ultrasonic diagnostic apparatus
WO2011035312A1 (en) 2009-09-21 2011-03-24 The Trustees Of Culumbia University In The City Of New York Systems and methods for opening of a tissue barrier
US20110196235A1 (en) 2008-04-22 2011-08-11 Allan Dunbar Ultrasound imaging system and method for providing assistance in an ultrasound imaging system
WO2010014977A1 (en) 2008-08-01 2010-02-04 The Trustees Of Columbia University In The City Of New York Systems and methods for matching and imaging tissue characteristics
KR101659910B1 (en) 2008-08-08 2016-09-27 마우이 이미징, 인코포레이티드 Imaging with multiple aperture medical ultrasound and synchronization of add-on systems
WO2010030819A1 (en) 2008-09-10 2010-03-18 The Trustees Of Columbia University In The City Of New York Systems and methods for opening a tissue
EP2345066B1 (en) * 2008-09-18 2018-10-31 FUJIFILM SonoSite, Inc. Methods for manufacturing ultrasound transducers and other components
US20100171395A1 (en) * 2008-10-24 2010-07-08 University Of Southern California Curved ultrasonic array transducers
FR2938918B1 (en) * 2008-11-21 2011-02-11 Commissariat Energie Atomique METHOD AND DEVICE FOR THE ACOUSTIC ANALYSIS OF MICROPOROSITIES IN MATERIALS SUCH AS CONCRETE USING A PLURALITY OF CMUTS TRANSDUCERS INCORPORATED IN THE MATERIAL
US8556850B2 (en) 2008-12-31 2013-10-15 St. Jude Medical, Atrial Fibrillation Division, Inc. Shaft and handle for a catheter with independently-deflectable segments
US8676290B2 (en) 2010-05-11 2014-03-18 St. Jude Medical, Atrial Fibrillation Division, Inc. Multi-directional catheter control handle
US8781201B2 (en) * 2009-03-04 2014-07-15 Robert E. Sandstrom Method of operating a pathology laboratory
KR101659723B1 (en) 2009-04-14 2016-09-26 마우이 이미징, 인코포레이티드 Multiple aperture ultrasound array alignment fixture
JP2012523904A (en) * 2009-04-17 2012-10-11 ビジュアルソニックス インコーポレイテッド Method for nonlinear imaging of ultrasound contrast agents at high frequencies
US8157738B2 (en) * 2009-06-02 2012-04-17 Samplify Systems, Inc. Ultrasound signal compression
CN101601594B (en) * 2009-07-08 2012-01-18 汕头市超声仪器研究所有限公司 Excitation method of medical B-ultrasound front-end excitation device
WO2011036891A1 (en) * 2009-09-28 2011-03-31 パナソニック株式会社 Ultrasonic diagnostic device
TW201115409A (en) * 2009-10-29 2011-05-01 Hannspree Inc A mouse capable of generating vapor
US20130338498A1 (en) * 2009-11-02 2013-12-19 Board Of Regents, The University Of Texas System Catheter for Intravascular Ultrasound and Photoacoustic Imaging
US9649089B2 (en) * 2009-11-17 2017-05-16 B-K Medical Aps Portable ultrasound scanner and docking system
EP2536339B1 (en) 2010-02-18 2024-05-15 Maui Imaging, Inc. Point source transmission and speed-of-sound correction using multi-aperture ultrasound imaging
US8439840B1 (en) * 2010-05-04 2013-05-14 Sonosite, Inc. Ultrasound imaging system and method with automatic adjustment and/or multiple sample volumes
US9289147B2 (en) 2010-05-11 2016-03-22 St. Jude Medical, Atrial Fibrillation Division, Inc. Multi-directional flexible wire harness for medical devices
WO2011148275A1 (en) 2010-05-26 2011-12-01 Koninklijke Philips Electronics N.V. High volume rate 3d ultrasonic diagnostic imaging of the heart
JP5965898B2 (en) * 2010-05-26 2016-08-10 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. High volume rate 3D ultrasound imaging
KR101999078B1 (en) 2010-06-09 2019-07-10 리전츠 오브 더 유니버스티 오브 미네소타 Dual mode ultrasound transducer (dmut) system and method for controlling delivery of ultrasound therapy
EP2584971B1 (en) * 2010-06-23 2021-11-10 Analog Devices, Inc. Ultrasound imaging with analog processing
US9513368B2 (en) * 2010-06-30 2016-12-06 General Electric Company Method and system for ultrasound data processing
WO2012051305A2 (en) 2010-10-13 2012-04-19 Mau Imaging, Inc. Multiple aperture probe internal apparatus and cable assemblies
EP3563768A3 (en) 2010-10-13 2020-02-12 Maui Imaging, Inc. Concave ultrasound transducers and 3d arrays
CN101972154A (en) * 2010-11-22 2011-02-16 中国医学科学院生物医学工程研究所 Ultrasound emitting system for high frequency ultrasonic diagnostic equipment
CN102109601B (en) * 2010-12-06 2013-07-10 王茂森 Sonar camera
CN102551791B (en) * 2010-12-17 2016-04-27 深圳迈瑞生物医疗电子股份有限公司 A kind of ultrasonic imaging method and device
EP3495022B1 (en) 2011-04-14 2023-06-07 Regents of the University of Minnesota Vascular characterization using ultrasound imaging
US9320491B2 (en) 2011-04-18 2016-04-26 The Trustees Of Columbia University In The City Of New York Ultrasound devices methods and systems
USD726905S1 (en) 2011-05-11 2015-04-14 St. Jude Medical, Atrial Fibrillation Division, Inc. Control handle for a medical device
WO2012162664A1 (en) 2011-05-26 2012-11-29 The Trustees Of Columbia University In The City Of New York Systems and methods for opening of a tissue barrier in primates
US9775585B2 (en) * 2011-06-15 2017-10-03 Toshiba Medical Systems Corporation Variable power saving processing scheme for ultrasound beamformer functionality
US20150196279A1 (en) * 2011-10-18 2015-07-16 Riverside Research Institute Synthetic-focusing strategies for real-time annular-array imaging
US20130093901A1 (en) * 2011-10-18 2013-04-18 Riverside Research Institute Synthetic-focusing strategies for real-time annular-array imaging
TWI440878B (en) * 2011-10-27 2014-06-11 Ind Tech Res Inst Ultrasound receiving module, method and system
EP2771712B1 (en) 2011-10-28 2023-03-22 Decision Sciences International Corporation Spread spectrum coded waveforms in ultrasound imaging
JP6407719B2 (en) 2011-12-01 2018-10-17 マウイ イマギング,インコーポレーテッド Motion detection using ping base and multi-aperture Doppler ultrasound
JP2015503404A (en) 2011-12-29 2015-02-02 マウイ イマギング,インコーポレーテッド Arbitrary path M-mode ultrasound imaging
CN104135937B (en) 2012-02-21 2017-03-29 毛伊图像公司 Material stiffness is determined using porous ultrasound
CN103284753B (en) * 2012-02-22 2015-12-09 香港理工大学 Ultrasonic imaging system and formation method
US20130242493A1 (en) * 2012-03-13 2013-09-19 Qualcomm Mems Technologies, Inc. Low cost interposer fabricated with additive processes
EP2833791B1 (en) 2012-03-26 2022-12-21 Maui Imaging, Inc. Methods for improving ultrasound image quality by applying weighting factors
EP2809236B1 (en) * 2012-04-09 2019-12-11 St. Jude Medical Atrial Fibrillation Division, Inc. System comprising a medical device and a pair of wire harnesses
IN2015DN00556A (en) 2012-08-10 2015-06-26 Maui Imaging Inc
EP3893022A1 (en) 2012-09-06 2021-10-13 Maui Imaging, Inc. Ultrasound imaging system memory architecture
WO2014059170A1 (en) 2012-10-10 2014-04-17 The Trustees Of Columbia University In The City Of New York Systems and methods for mechanical mapping of cardiac rhythm
JP6383731B2 (en) * 2012-12-28 2018-08-29 ボルケーノ コーポレイション Synthetic aperture image reconstruction system in patient interface module (PIM)
US9717141B1 (en) * 2013-01-03 2017-07-25 St. Jude Medical, Atrial Fibrillation Division, Inc. Flexible printed circuit with removable testing portion
US9232933B2 (en) * 2013-02-01 2016-01-12 Kabushiki Kaisha Toshiba Transformer-based multiplexer for ultrasound imaging system and method
US9510806B2 (en) 2013-03-13 2016-12-06 Maui Imaging, Inc. Alignment of ultrasound transducer arrays and multiple aperture probe assembly
US20140276197A1 (en) * 2013-03-15 2014-09-18 Greer Laboratories, Inc. Apparatus and method for determining treatment endpoints for allergen testing
US9211110B2 (en) 2013-03-15 2015-12-15 The Regents Of The University Of Michigan Lung ventillation measurements using ultrasound
CN103175900B (en) * 2013-03-19 2016-02-17 中国科学院声学研究所 A kind of phased-array non-destructive inspection device and system
US9188664B2 (en) 2013-05-31 2015-11-17 eagleyemed, Inc. Ultrasound image enhancement and super-resolution
US9247921B2 (en) 2013-06-07 2016-02-02 The Trustees Of Columbia University In The City Of New York Systems and methods of high frame rate streaming for treatment monitoring
CN109044407A (en) 2013-07-23 2018-12-21 明尼苏达大学评议会 It is formed and/or is rebuild using the ultrasound image of multi-frequency waveform
US10322178B2 (en) 2013-08-09 2019-06-18 The Trustees Of Columbia University In The City Of New York Systems and methods for targeted drug delivery
WO2015027164A1 (en) 2013-08-22 2015-02-26 The Regents Of The University Of Michigan Histotripsy using very short ultrasound pulses
US10028723B2 (en) 2013-09-03 2018-07-24 The Trustees Of Columbia University In The City Of New York Systems and methods for real-time, transcranial monitoring of blood-brain barrier opening
US9844359B2 (en) 2013-09-13 2017-12-19 Decision Sciences Medical Company, LLC Coherent spread-spectrum coded waveforms in synthetic aperture image formation
US9883848B2 (en) 2013-09-13 2018-02-06 Maui Imaging, Inc. Ultrasound imaging using apparent point-source transmit transducer
JP6223783B2 (en) * 2013-11-07 2017-11-01 三菱日立パワーシステムズ株式会社 Ultrasonic flaw detection sensor and ultrasonic flaw detection method
WO2015106027A1 (en) * 2014-01-08 2015-07-16 Klock John C Quantitative transmission ultrasound imaging of dense anatomical structures
JP2017507736A (en) 2014-03-12 2017-03-23 フジフィルム ソノサイト インコーポレイテッド High frequency ultrasonic transducer with ultrasonic lens with integrated center matching layer
CN104013439A (en) * 2014-05-05 2014-09-03 苏州森斯凌传感技术有限公司 Ultrasonic superposition detection system based on voltage calibration
KR102617888B1 (en) 2014-08-18 2023-12-22 마우이 이미징, 인코포레이티드 Network-based ultrasound imaging system
US9945946B2 (en) 2014-09-11 2018-04-17 Microsoft Technology Licensing, Llc Ultrasonic depth imaging
JP6734270B2 (en) * 2014-10-30 2020-08-05 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Compressed sensing when forming ultrasound images
US10548571B1 (en) 2014-11-21 2020-02-04 Ultrasee Corp Fast 2D blood flow velocity imaging
US10989810B2 (en) * 2015-01-23 2021-04-27 Dalhousie University Systems and methods for beamforming using variable sampling
KR102387708B1 (en) 2015-01-30 2022-04-19 삼성메디슨 주식회사 Ultrasound System And Method For Providing Guide To Improve HPRF Doppler Image
KR20160097862A (en) * 2015-02-10 2016-08-18 삼성전자주식회사 Portable ultrasound apparatus, and control method for same
JP6835744B2 (en) 2015-02-25 2021-02-24 ディスィジョン サイエンシズ メディカル カンパニー,エルエルシー Kaplant device
KR20180094774A (en) * 2015-03-18 2018-08-24 디시전 사이선씨즈 메디컬 컴패니, 엘엘씨 Synthetic aperture ultrasound system
TWI536015B (en) * 2015-03-24 2016-06-01 佳世達科技股份有限公司 Ultrasound scanning system and ultrasound scanning method
US9251781B1 (en) 2015-04-06 2016-02-02 King Saud University Pulser logic method and system for an ultrasound beamformer
US9995638B2 (en) * 2015-04-30 2018-06-12 National Instruments Corporation Cold-junction-compensated input terminal of a thermocouple instrument
US10020783B2 (en) 2015-07-01 2018-07-10 Bei Electronics Llc Class D amplifier using Fs/4 modulation and envelope tracking power supplies
CN105030280B (en) * 2015-09-02 2019-03-05 宁波美童智能科技有限公司 A kind of intelligent wireless ultrasonic fetal imaging system
US10413274B2 (en) * 2015-09-02 2019-09-17 Ningbo Marvoto Intelligent Technology Co., Ltd Method for controlling wireless intelligent ultrasound fetal imaging system
CA3001315C (en) 2015-10-08 2023-12-19 Decision Sciences Medical Company, LLC Acoustic orthopedic tracking system and methods
US10813624B2 (en) * 2015-10-30 2020-10-27 Carestream Health, Inc. Ultrasound display method
EP3383275B1 (en) * 2015-11-25 2021-01-06 Fujifilm Sonosite, Inc. High frequency ultrasound transducer and method for manufacture
CA3004356C (en) * 2015-11-25 2024-04-23 Fujifilm Sonosite, Inc. Medical instrument including high frequency ultrasound transducer array
EP3408037A4 (en) 2016-01-27 2019-10-23 Maui Imaging, Inc. Ultrasound imaging with sparse array probes
US20170307755A1 (en) 2016-04-20 2017-10-26 YoR Labs Method and System for Determining Signal Direction
WO2017184181A1 (en) * 2016-04-22 2017-10-26 Chirp Micro, Inc. Ultrasonic input device
US10132924B2 (en) * 2016-04-29 2018-11-20 R2Sonic, Llc Multimission and multispectral sonar
US10401492B2 (en) * 2016-05-31 2019-09-03 yoR Labs, Inc. Methods and systems for phased array returning wave front segmentation
EP3518780A4 (en) * 2016-09-29 2020-07-01 Exact Imaging Inc. Signal processing pathway for an ultrasonic imaging device
WO2018065254A1 (en) * 2016-10-03 2018-04-12 Koninklijke Philips N.V. Intraluminal imaging devices with a reduced number of signal channels
JP6822078B2 (en) * 2016-11-08 2021-01-27 コニカミノルタ株式会社 Control device and control method for ultrasonic diagnostic equipment
WO2018091341A1 (en) * 2016-11-17 2018-05-24 Koninklijke Philips N.V. Ultrasound system and method for detection of kidney stones using twinkling artifact
EP3336485B1 (en) 2016-12-15 2020-09-23 Safran Landing Systems UK Limited Aircraft assembly including deflection sensor
JP7333273B2 (en) * 2017-05-11 2023-08-24 コーニンクレッカ フィリップス エヌ ヴェ Echo Artifact Cancellation in Diagnostic Ultrasound Images
JP6933016B2 (en) * 2017-06-22 2021-09-08 コニカミノルタ株式会社 Radiation imaging system
EP3435116A1 (en) 2017-07-24 2019-01-30 Koninklijke Philips N.V. An ultrasound probe and processing method
CN109381218B (en) * 2017-08-04 2021-08-20 香港理工大学深圳研究院 Three-dimensional ultrasonic imaging method and device
CN107566029B (en) * 2017-08-28 2020-04-28 西南电子技术研究所(中国电子科技集团公司第十研究所) Space network on-demand access system
TWI743411B (en) * 2017-11-08 2021-10-21 美商富士膠片索諾聲公司 Ultrasound system with high frequency detail
US11458337B2 (en) 2017-11-28 2022-10-04 Regents Of The University Of Minnesota Adaptive refocusing of ultrasound transducer arrays using image data
US20190167148A1 (en) * 2017-12-04 2019-06-06 Bard Access Systems, Inc. Systems And Methods For Visualizing Anatomy, Locating Medical Devices, Or Placing Medical Devices
CN110095778B (en) * 2018-01-29 2021-05-28 中国石油天然气股份有限公司 Storage tank defect detection device, system and method
US11596812B2 (en) 2018-04-06 2023-03-07 Regents Of The University Of Minnesota Wearable transcranial dual-mode ultrasound transducers for neuromodulation
CN108924353A (en) * 2018-06-29 2018-11-30 努比亚技术有限公司 anti-interference method, mobile terminal and computer readable storage medium
CN108924955B (en) * 2018-07-30 2021-12-14 山东大骋医疗科技有限公司 CT data transmission and control method and device based on double-chain wireless communication
CN111050060B (en) * 2018-10-12 2021-08-31 华为技术有限公司 Focusing method and device applied to terminal equipment and terminal equipment
WO2020113083A1 (en) 2018-11-28 2020-06-04 Histosonics, Inc. Histotripsy systems and methods
KR20210114497A (en) * 2019-01-15 2021-09-23 엑소 이미징, 인크. Synthetic Lenses for Ultrasound Imaging Systems
CN109814110B (en) * 2019-02-21 2022-05-17 哈尔滨工程大学 Array arrangement method for deep-sea long-baseline positioning array topology structure
TWI706641B (en) * 2019-03-06 2020-10-01 奔騰智慧生醫股份有限公司 A structure and a processing method of system with multi-beam and micro-beamforming
CA3130104A1 (en) 2019-03-06 2020-09-10 Decision Sciences Medical Company, LLC Methods for manufacturing and distributing semi-rigid acoustic coupling articles and packaging for ultrasound imaging
US11464486B2 (en) * 2019-03-19 2022-10-11 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Wireless transducer charging for handheld ultrasound systems
CN110037741B (en) * 2019-04-08 2024-02-20 深圳市贝斯曼精密仪器有限公司 Blood flow velocity detection system
US11154274B2 (en) 2019-04-23 2021-10-26 Decision Sciences Medical Company, LLC Semi-rigid acoustic coupling articles for ultrasound diagnostic and treatment applications
CN110044645B (en) * 2019-05-07 2024-08-02 中国铁道科学研究院集团有限公司 Contact rail protective cover state detection system and method
US11032167B2 (en) * 2019-06-14 2021-06-08 Apple Inc. Precursor rejection filter
US11529127B2 (en) * 2019-06-25 2022-12-20 Bfly Operations, Inc. Methods and apparatuses for processing ultrasound signals
CN110327077B (en) * 2019-07-09 2022-04-15 深圳开立生物医疗科技股份有限公司 Blood flow display method and device, ultrasonic equipment and storage medium
WO2021020043A1 (en) * 2019-07-26 2021-02-04 富士フイルム株式会社 Ultrasonic diagnosis apparatus and control method of ultrasonic diagnosis apparatus
KR102335321B1 (en) * 2019-12-10 2021-12-08 한국과학기술연구원 Ultrasonic therapy and diagnosis apparatus implementing multiple functions using detachable circuit boards
US11813485B2 (en) 2020-01-28 2023-11-14 The Regents Of The University Of Michigan Systems and methods for histotripsy immunosensitization
US11493979B2 (en) * 2020-02-27 2022-11-08 Fujifilm Sonosite, Inc. Dynamic power reduction technique for ultrasound systems
US11547386B1 (en) 2020-04-02 2023-01-10 yoR Labs, Inc. Method and apparatus for multi-zone, multi-frequency ultrasound image reconstruction with sub-zone blending
US11998391B1 (en) 2020-04-02 2024-06-04 yoR Labs, Inc. Method and apparatus for composition of ultrasound images with integration of “thick-slice” 3-dimensional ultrasound imaging zone(s) and 2-dimensional ultrasound zone(s) utilizing a multi-zone, multi-frequency ultrasound image reconstruction scheme with sub-zone blending
US10877124B1 (en) * 2020-06-23 2020-12-29 Charles A Uzes System for receiving communications
EP4168826A1 (en) * 2020-06-23 2023-04-26 Koninklijke Philips N.V. Ultrasound transducer probe based analog to digital conversion for continuous wave doppler, and associated devices, systems, and methods
CN111983629B (en) * 2020-08-14 2024-03-26 西安应用光学研究所 Linear array signal target extraction device and extraction method
US11344281B2 (en) 2020-08-25 2022-05-31 yoR Labs, Inc. Ultrasound visual protocols
US11832991B2 (en) 2020-08-25 2023-12-05 yoR Labs, Inc. Automatic ultrasound feature detection
CN116685847A (en) 2020-11-13 2023-09-01 决策科学医疗有限责任公司 System and method for synthetic aperture ultrasound imaging of objects
US11751850B2 (en) 2020-11-19 2023-09-12 yoR Labs, Inc. Ultrasound unified contrast and time gain compensation control
US11704142B2 (en) 2020-11-19 2023-07-18 yoR Labs, Inc. Computer application with built in training capability
US11504093B2 (en) * 2021-01-22 2022-11-22 Exo Imaging, Inc. Equalization for matrix based line imagers for ultrasound imaging systems
US11683829B2 (en) * 2021-05-31 2023-06-20 Clarius Mobile Health Corp. Systems and methods for improving quality of service when transmitting ultrasound image data over a wireless connection
EP4351447A1 (en) * 2021-06-07 2024-04-17 The Regents of The University of Michigan All-in-one ultrasound systems and methods including histotripsy
US12053330B2 (en) 2021-06-23 2024-08-06 Exo Imaging, Inc. Systems and methods for testing MEMS arrays and associated ASICs
CN118541603A (en) * 2021-09-13 2024-08-23 云溪医疗影像公司 Ultrasound imaging using focused beams to reduce mechanical and thermal index
WO2023075756A1 (en) * 2021-10-26 2023-05-04 Exo Imaging, Inc. Multi-transducer chip ultrasound device
US12099150B2 (en) 2021-10-26 2024-09-24 Exo Imaging, Inc. Multi-transducer chip ultrasound device
US12076684B2 (en) * 2021-11-22 2024-09-03 GE Precision Healthcare LLC Method and system for automatically cleaning air filters of a medical imaging system
US11998387B2 (en) 2022-01-12 2024-06-04 Exo Imaging, Inc. Multilayer housing seals for ultrasound transducers
US12089991B2 (en) 2022-05-18 2024-09-17 Verasonics, Inc. Ultrasound transmitter with low distortion and concurrent receive
WO2023239913A1 (en) * 2022-06-09 2023-12-14 Bfly Operations, Inc. Point of care ultrasound interface
CN115237308B (en) * 2022-06-29 2024-06-18 青岛海信医疗设备股份有限公司 Ultrasound image amplification method and ultrasound device
US11881875B1 (en) 2022-08-25 2024-01-23 Stmicroelectronics S.R.L. Waveform generator using a waveform coding scheme for both long states and toggle states
US20240215952A1 (en) * 2022-12-29 2024-07-04 Wuhan United Imaging Healthcare Co., Ltd. Control method for medical probe, imaging method, and ultrasound system
CN116184374B (en) * 2023-01-01 2024-06-21 云南保利天同水下装备科技有限公司 Driving signal generating method and driving signal generating system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US53653A (en) 1866-04-03 Improvement in harvester-rakes
US53748A (en) 1866-04-03 Improved water-can for railroad-cars
US683168A (en) 1900-12-14 1901-09-24 William V Bleha Hat-hanger.
US683870A (en) 1901-01-17 1901-10-01 James O Wright Dredge-bucket.
US736232A (en) 1903-02-18 1903-08-11 Cambridge Mfg Company Golf-ball.
US6083164A (en) 1997-06-27 2000-07-04 Siemens Medical Systems, Inc. Ultrasound front-end circuit combining the transmitter and automatic transmit/receiver switch
US6851392B2 (en) 2002-10-10 2005-02-08 Visual Sonics Small-animal mount assembly
US20050272183A1 (en) 2004-04-20 2005-12-08 Marc Lukacs Arrayed ultrasonic transducer
US7052460B2 (en) 2003-05-09 2006-05-30 Visualsonics Inc. System for producing an ultrasound image using line-based image reconstruction
US10998605B2 (en) 2017-10-25 2021-05-04 Tesat-Spacecom Gmbh & Co. Kg Connecting unit for connecting to first and second interfaces, where the connecting unit comprises an internal conductor disposed within a housing formed by half-shell construction

Family Cites Families (209)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2205169A (en) 1937-05-06 1940-06-18 Hallman Abram Signal structure
US3922572A (en) 1974-08-12 1975-11-25 Us Navy Electroacoustical transducer
US4217684A (en) 1979-04-16 1980-08-19 General Electric Company Fabrication of front surface matched ultrasonic transducer array
JPS584640Y2 (en) 1979-11-02 1983-01-26 日立機電工業株式会社 Device for detecting the falling state of fermented materials in each fermentation chamber in a multi-stage fermenter
US4385255A (en) 1979-11-02 1983-05-24 Yokogawa Electric Works, Ltd. Linear array ultrasonic transducer
FR2485858B1 (en) 1980-06-25 1986-04-11 Commissariat Energie Atomique METHOD FOR MANUFACTURING ULTRASONIC TRANSDUCERS OF COMPLEX SHAPES AND APPLICATION TO OBTAINING ANNULAR TRANSDUCERS
US4360007A (en) 1980-08-05 1982-11-23 Yeda Research And Development Co., Ltd. Remote controlled magnetic actuator particularly for an implantable device like a valve
DE3301967A1 (en) 1983-01-21 1984-07-26 Siemens AG, 1000 Berlin und 8000 München ULTRASONIC IMAGING SYSTEM
DE3435569A1 (en) 1984-09-27 1986-04-10 Siemens AG, 1000 Berlin und 8000 München METHOD FOR PRODUCING AN ARRAY ULTRASONIC ANTENNA
US4802099A (en) 1986-01-03 1989-01-31 International Business Machines Corporation Physical parameter balancing of circuit islands in integrated circuit wafers
US4809184A (en) 1986-10-22 1989-02-28 General Electric Company Method and apparatus for fully digital beam formation in a phased array coherent imaging system
US4841977A (en) 1987-05-26 1989-06-27 Inter Therapy, Inc. Ultra-thin acoustic transducer and balloon catheter using same in imaging array subassembly
DE3829999A1 (en) 1988-09-01 1990-03-15 Schering Ag ULTRASONIC METHOD AND CIRCUITS THEREOF
US5410516A (en) 1988-09-01 1995-04-25 Schering Aktiengesellschaft Ultrasonic processes and circuits for performing them
US5014710A (en) 1988-09-13 1991-05-14 Acuson Corporation Steered linear color doppler imaging
US5759791A (en) 1989-01-17 1998-06-02 The Johns Hopkins University Cancer related antigen
DE58906448D1 (en) 1989-02-22 1994-01-27 Siemens Ag Ultrasonic array with trapezoidal vibrating elements and method and device for its production.
US4945155A (en) 1989-05-11 1990-07-31 Eastman Kodak Company Preparation of low color copoly(arylene sulfide) by heating copoly(arylene sulfide)
US5065068A (en) 1989-06-07 1991-11-12 Oakley Clyde G Ferroelectric ceramic transducer
EP0410020B1 (en) 1989-07-24 1994-11-17 Palitex Project-Company GmbH Process and apparatus for automatically cleaning bobbin containers and/or the anti-ballooning devices of a two-for-one twisting spindle of a two-for-one twisting machine
US5014712A (en) * 1989-12-26 1991-05-14 General Electric Company Coded excitation for transmission dynamic focusing of vibratory energy beam
US5160870A (en) 1990-06-25 1992-11-03 Carson Paul L Ultrasonic image sensing array and method
US5123415A (en) 1990-07-19 1992-06-23 Advanced Technology Laboratories, Inc. Ultrasonic imaging by radial scan of trapezoidal sector
US5445155A (en) 1991-03-13 1995-08-29 Scimed Life Systems Incorporated Intravascular imaging apparatus and methods for use and manufacture
DE4209394C2 (en) 1991-03-26 1996-07-18 Hitachi Ltd Ultrasound imaging device
GB2258364A (en) 1991-07-30 1993-02-03 Intravascular Res Ltd Ultrasonic tranducer
US5325860A (en) 1991-11-08 1994-07-05 Mayo Foundation For Medical Education And Research Ultrasonic and interventional catheter and method
US5704361A (en) 1991-11-08 1998-01-06 Mayo Foundation For Medical Education And Research Volumetric image ultrasound transducer underfluid catheter system
US5713363A (en) 1991-11-08 1998-02-03 Mayo Foundation For Medical Education And Research Ultrasound catheter and method for imaging and hemodynamic monitoring
US5186177A (en) 1991-12-05 1993-02-16 General Electric Company Method and apparatus for applying synthetic aperture focusing techniques to a catheter based system for high frequency ultrasound imaging of small vessels
DE4142372A1 (en) 1991-12-20 1993-06-24 Siemens Ag Ultrasound transducer array of elementary transducers arranged in a row e.g. for medical research - has elementary transducers connected to front and back terminals and connected to neighbouring transducers by piezo-ceramic connectors.
US5203335A (en) * 1992-03-02 1993-04-20 General Electric Company Phased array ultrasonic beam forming using oversampled A/D converters
US5318033A (en) 1992-04-17 1994-06-07 Hewlett-Packard Company Method and apparatus for increasing the frame rate and resolution of a phased array imaging system
US5329496A (en) 1992-10-16 1994-07-12 Duke University Two-dimensional array ultrasonic transducers
US5744898A (en) 1992-05-14 1998-04-28 Duke University Ultrasound transducer array with transmitter/receiver integrated circuitry
US5311095A (en) 1992-05-14 1994-05-10 Duke University Ultrasonic transducer array
DE4226865A1 (en) 1992-08-13 1994-03-10 Siemens Ag Ultrasonic dermatological diagnosis arrangement - contains applicator with ultrasonic transducer and image display forming hand guided diagnostic unit
US5368037A (en) 1993-02-01 1994-11-29 Endosonics Corporation Ultrasound catheter
US5453575A (en) 1993-02-01 1995-09-26 Endosonics Corporation Apparatus and method for detecting blood flow in intravascular ultrasonic imaging
US20070016071A1 (en) 1993-02-01 2007-01-18 Volcano Corporation Ultrasound transducer assembly
US5369624A (en) 1993-03-26 1994-11-29 Siemens Medical Systems, Inc. Digital beamformer having multi-phase parallel processing
US5388079A (en) 1993-03-26 1995-02-07 Siemens Medical Systems, Inc. Partial beamforming
US5345426A (en) 1993-05-12 1994-09-06 Hewlett-Packard Company Delay interpolator for digital phased array ultrasound beamformers
US5434827A (en) 1993-06-15 1995-07-18 Hewlett-Packard Company Matching layer for front acoustic impedance matching of clinical ultrasonic transducers
US5460181A (en) 1994-10-06 1995-10-24 Hewlett Packard Co. Ultrasonic transducer for three dimensional imaging
US5371717A (en) 1993-06-15 1994-12-06 Hewlett-Packard Company Microgrooves for apodization and focussing of wideband clinical ultrasonic transducers
US5392259A (en) 1993-06-15 1995-02-21 Bolorforosh; Mir S. S. Micro-grooves for the design of wideband clinical ultrasonic transducers
US5465725A (en) 1993-06-15 1995-11-14 Hewlett Packard Company Ultrasonic probe
US5553035A (en) 1993-06-15 1996-09-03 Hewlett-Packard Company Method of forming integral transducer and impedance matching layers
US5505088A (en) 1993-08-27 1996-04-09 Stellartech Research Corp. Ultrasound microscope for imaging living tissues
US5792058A (en) 1993-09-07 1998-08-11 Acuson Corporation Broadband phased array transducer with wide bandwidth, high sensitivity and reduced cross-talk and method for manufacture thereof
US5415175A (en) 1993-09-07 1995-05-16 Acuson Corporation Broadband phased array transducer design with frequency controlled two dimension capability and methods for manufacture thereof
US5743855A (en) 1995-03-03 1998-04-28 Acuson Corporation Broadband phased array transducer design with frequency controlled two dimension capability and methods for manufacture thereof
US5438998A (en) 1993-09-07 1995-08-08 Acuson Corporation Broadband phased array transducer design with frequency controlled two dimension capability and methods for manufacture thereof
US5390674A (en) 1993-12-30 1995-02-21 Advanced Technology Laboratories, Inc. Ultrasonic imaging system with interpolated scan lines
DE19514307A1 (en) 1994-05-19 1995-11-23 Siemens Ag Duplexer for ultrasonic imaging system
JPH10507936A (en) 1994-08-05 1998-08-04 アキュソン コーポレイション Method and apparatus for a transmit beam generator system
US5623928A (en) 1994-08-05 1997-04-29 Acuson Corporation Method and apparatus for coherent image formation
US5685308A (en) 1994-08-05 1997-11-11 Acuson Corporation Method and apparatus for receive beamformer system
US6029116A (en) 1994-08-05 2000-02-22 Acuson Corporation Method and apparatus for a baseband processor of a receive beamformer system
US5522391A (en) 1994-08-09 1996-06-04 Hewlett-Packard Company Delay generator for phased array ultrasound beamformer
EP0696435A3 (en) 1994-08-10 1997-03-12 Hewlett Packard Co Ultrasonic probe
US5544655A (en) 1994-09-16 1996-08-13 Atlantis Diagnostics International, Llc Ultrasonic multiline beamforming with interleaved sampling
US5655276A (en) 1995-02-06 1997-08-12 General Electric Company Method of manufacturing two-dimensional array ultrasonic transducers
GB9504751D0 (en) 1995-03-09 1995-04-26 Quality Medical Imaging Ltd Apparatus for ultrasonic tissue investigation
DE19514308A1 (en) 1995-04-18 1996-10-24 Siemens Ag Ultrasonic transducer head with integrated controllable amplifier devices
US5655538A (en) 1995-06-19 1997-08-12 General Electric Company Ultrasonic phased array transducer with an ultralow impedance backfill and a method for making
CN1189217A (en) * 1995-06-29 1998-07-29 垓技术公司 Portable ultrasound imaging system
US5573001A (en) 1995-09-08 1996-11-12 Acuson Corporation Ultrasonic receive beamformer with phased sub-arrays
US5706819A (en) 1995-10-10 1998-01-13 Advanced Technology Laboratories, Inc. Ultrasonic diagnostic imaging with harmonic contrast agents
US5629865A (en) 1995-10-23 1997-05-13 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Pulse-echo ultrasonic imaging method for eliminating sample thickness variation effects
WO1997017018A1 (en) 1995-11-09 1997-05-15 Brigham & Women's Hospital Aperiodic ultrasound phased array
US6236144B1 (en) 1995-12-13 2001-05-22 Gec-Marconi Limited Acoustic imaging arrays
GB9525418D0 (en) 1995-12-13 1996-07-17 Marconi Gec Ltd Acoustic imaging arrays
US5653236A (en) 1995-12-29 1997-08-05 General Electric Company Apparatus for real-time distributed computation of beamforming delays in ultrasound imaging system
JP3573567B2 (en) 1996-04-12 2004-10-06 株式会社日立メディコ Ultrasonic probe and ultrasonic inspection apparatus using the same
US5704105A (en) 1996-09-04 1998-01-06 General Electric Company Method of manufacturing multilayer array ultrasonic transducers
US5795297A (en) 1996-09-12 1998-08-18 Atlantis Diagnostics International, L.L.C. Ultrasonic diagnostic imaging system with personal computer architecture
US5879303A (en) 1996-09-27 1999-03-09 Atl Ultrasound Ultrasonic diagnostic imaging of response frequency differing from transmit frequency
US5865749A (en) 1996-11-07 1999-02-02 Data Sciences International, Inc. Blood flow meter apparatus and method of use
US6626838B2 (en) 1996-11-07 2003-09-30 Transoma Medical, Inc. Blood flow meter apparatus and method of use
US6530887B1 (en) 1996-12-24 2003-03-11 Teratech Corporation Ultrasound probe with integrated electronics
US5844139A (en) 1996-12-30 1998-12-01 General Electric Company Method and apparatus for providing dynamically variable time delays for ultrasound beamformer
US5797847A (en) 1996-12-30 1998-08-25 General Electric Company Method and apparatus for complex bandpass filtering and decimation in ultrasound beamformer
US5857974A (en) 1997-01-08 1999-01-12 Endosonics Corporation High resolution intravascular ultrasound transducer assembly having a flexible substrate
US5940123A (en) 1997-02-13 1999-08-17 Atl Ultrasound High resolution ultrasonic imaging through interpolation of received scanline data
US5796207A (en) 1997-04-28 1998-08-18 Rutgers, The State University Of New Jersey Oriented piezo electric ceramics and ceramic/polymer composites
US5906580A (en) 1997-05-05 1999-05-25 Creare Inc. Ultrasound system and method of administering ultrasound including a plurality of multi-layer transducer elements
US5938612A (en) 1997-05-05 1999-08-17 Creare Inc. Multilayer ultrasonic transducer array including very thin layer of transducer elements
US5897501A (en) 1997-05-07 1999-04-27 General Electric Company Imaging system with multiplexer for controlling a multi-row ultrasonic transducer array
US6074346A (en) 1997-06-27 2000-06-13 Siemens Medical Systems, Inc. Transmit/receive ultrasound front end circuit providing automatic transmit/receive switching
US6050945A (en) 1997-06-27 2000-04-18 Siemens Medical Systems, Inc. Ultrasound front-end circuit combining the transmitter and automatic transmit/receive switch with agile power level control
JPH1147104A (en) 1997-08-08 1999-02-23 Nippon Koden Corp Patient monitoring device
US6128958A (en) 1997-09-11 2000-10-10 The Regents Of The University Of Michigan Phased array system architecture
US6586702B2 (en) 1997-09-25 2003-07-01 Laser Electro Optic Application Technology Company High density pixel array and laser micro-milling method for fabricating array
US6049159A (en) 1997-10-06 2000-04-11 Albatros Technologies, Inc. Wideband acoustic transducer
FR2772590B1 (en) 1997-12-18 2000-04-14 Michel Puech USE OF AN ULTRASONIC TRANSDUCER FOR ECHOGRAPHIC EXPLORATION OF THE POSTERIOR SEGMENT OF THE EYEBALL
US6262749B1 (en) 1997-12-31 2001-07-17 Acuson Corporation Ultrasonic system and method for data transfer, storage and/or processing
US5905692A (en) 1997-12-31 1999-05-18 Analogic Corporation Digital ultrasound beamformer
FR2773459B1 (en) 1998-01-12 2000-04-14 Centre Nat Rech Scient PROCESS FOR EXPLORING AND VISUALIZING TISSUES OF HUMAN OR ANIMAL ORIGIN FROM A HIGH FREQUENCY ULTRASONIC SENSOR
JP4272353B2 (en) 1998-01-28 2009-06-03 シン フィルム エレクトロニクス エイエスエイ Method for generating a three-dimensional conductive structure and / or semiconductive structure, a method for erasing the structure, and an electric field generator / modulator used with the method
US5977691A (en) 1998-02-10 1999-11-02 Hewlett-Packard Company Element interconnections for multiple aperture transducers
JP3345580B2 (en) 1998-03-05 2002-11-18 株式会社東芝 Ultrasonic probe manufacturing method
US6183578B1 (en) 1998-04-21 2001-02-06 Penn State Research Foundation Method for manufacture of high frequency ultrasound transducers
US6547731B1 (en) 1998-05-05 2003-04-15 Cornell Research Foundation, Inc. Method for assessing blood flow and apparatus thereof
US5970025A (en) 1998-06-10 1999-10-19 Acuson Corporation Ultrasound beamformation integrated circuit and method
JP2000050391A (en) 1998-07-31 2000-02-18 Olympus Optical Co Ltd Ultrasonic transducer and its manufacture
US6001062A (en) 1998-08-03 1999-12-14 Scimed Life Systems, Inc. Slewing bandpass filter for selective passage of time varying acoustic signals
AU1128600A (en) 1998-11-20 2000-06-13 Joie P. Jones Methods for selectively dissolving and removing materials using ultra-high frequency ultrasound
US6193662B1 (en) 1999-02-17 2001-02-27 Atl Ultrasound High frame rate pulse inversion harmonic ultrasonic diagnostic imaging system
US6650264B1 (en) 1999-03-10 2003-11-18 Cirrus Logic, Inc. Quadrature sampling architecture and method for analog-to-digital converters
US6492762B1 (en) 1999-03-22 2002-12-10 Transurgical, Inc. Ultrasonic transducer, transducer array, and fabrication method
US7391872B2 (en) 1999-04-27 2008-06-24 Frank Joseph Pompei Parametric audio system
US6322505B1 (en) 1999-06-08 2001-11-27 Acuson Corporation Medical diagnostic ultrasound system and method for post processing
US20010007940A1 (en) 1999-06-21 2001-07-12 Hosheng Tu Medical device having ultrasound imaging and therapeutic means
US6235024B1 (en) 1999-06-21 2001-05-22 Hosheng Tu Catheters system having dual ablation capability
US6258034B1 (en) 1999-08-04 2001-07-10 Acuson Corporation Apodization methods and apparatus for acoustic phased array aperture for diagnostic medical ultrasound transducer
US6251073B1 (en) 1999-08-20 2001-06-26 Novasonics, Inc. Miniaturized ultrasound apparatus and method
US6497664B1 (en) 1999-09-14 2002-12-24 Ecton, Inc. Medical diagnostic ultrasound system and method
US6325759B1 (en) 1999-09-23 2001-12-04 Ultrasonix Medical Corporation Ultrasound imaging system
US6255761B1 (en) 1999-10-04 2001-07-03 The United States Of America As Represented By The Secretary Of The Navy Shaped piezoelectric composite transducer
US6806622B1 (en) 1999-10-22 2004-10-19 Materials Systems, Inc. Impact-reinforced piezocomposite transducer array
US6350238B1 (en) 1999-11-02 2002-02-26 Ge Medical Systems Global Technology Company, Llc Real-time display of ultrasound in slow motion
US6546803B1 (en) 1999-12-23 2003-04-15 Daimlerchrysler Corporation Ultrasonic array transducer
US6457365B1 (en) 2000-02-09 2002-10-01 Endosonics Corporation Method and apparatus for ultrasonic imaging
TW569424B (en) 2000-03-17 2004-01-01 Matsushita Electric Ind Co Ltd Module with embedded electric elements and the manufacturing method thereof
US6787974B2 (en) 2000-03-22 2004-09-07 Prorhythm, Inc. Ultrasound transducer unit and planar ultrasound lens
JP2003527906A (en) 2000-03-23 2003-09-24 クロス マッチ テクノロジーズ, インコーポレイテッド Piezoelectric identification device and its application
US6503204B1 (en) 2000-03-31 2003-01-07 Acuson Corporation Two-dimensional ultrasonic transducer array having transducer elements in a non-rectangular or hexagonal grid for medical diagnostic ultrasonic imaging and ultrasound imaging system using same
US6483225B1 (en) 2000-07-05 2002-11-19 Acuson Corporation Ultrasound transducer and method of manufacture thereof
JP3951091B2 (en) 2000-08-04 2007-08-01 セイコーエプソン株式会社 Manufacturing method of semiconductor device
US6679845B2 (en) 2000-08-30 2004-01-20 The Penn State Research Foundation High frequency synthetic ultrasound array incorporating an actuator
US6822374B1 (en) 2000-11-15 2004-11-23 General Electric Company Multilayer piezoelectric structure with uniform electric field
US6558323B2 (en) 2000-11-29 2003-05-06 Olympus Optical Co., Ltd. Ultrasound transducer array
CA2429940C (en) 2000-12-01 2008-07-08 The Cleveland Clinic Foundation Miniature ultrasound transducer
US6759791B2 (en) 2000-12-21 2004-07-06 Ram Hatangadi Multidimensional array and fabrication thereof
US6695783B2 (en) 2000-12-22 2004-02-24 Koninklijke Philips Electronics N.V. Multiline ultrasound beamformers
JP3849976B2 (en) 2001-01-25 2006-11-22 松下電器産業株式会社 COMPOSITE PIEZOELECTRIC, ULTRASONIC PROBE FOR ULTRASONIC DIAGNOSTIC DEVICE, ULTRASONIC DIAGNOSTIC DEVICE, AND METHOD FOR PRODUCING COMPOSITE PIEZOELECTRIC
US6490228B2 (en) 2001-02-16 2002-12-03 Koninklijke Philips Electronics N.V. Apparatus and method of forming electrical connections to an acoustic transducer
US6936009B2 (en) 2001-02-27 2005-08-30 General Electric Company Matching layer having gradient in impedance for ultrasound transducers
US6761688B1 (en) 2001-02-28 2004-07-13 Siemens Medical Solutions Usa, Inc. Multi-layered transducer array and method having identical layers
US6664717B1 (en) 2001-02-28 2003-12-16 Acuson Corporation Multi-dimensional transducer array and method with air separation
US6437487B1 (en) 2001-02-28 2002-08-20 Acuson Corporation Transducer array using multi-layered elements and a method of manufacture thereof
US6685644B2 (en) 2001-04-24 2004-02-03 Kabushiki Kaisha Toshiba Ultrasound diagnostic apparatus
FR2828056B1 (en) 2001-07-26 2004-02-27 Metal Cable MULTI-ELEMENT TRANSDUCER OPERATING AT HIGH FREQUENCIES
US6635019B2 (en) 2001-08-14 2003-10-21 Koninklijke Philips Electronics Nv Scanhead assembly for ultrasonic imaging having an integral beamformer and demountable array
US6673018B2 (en) 2001-08-31 2004-01-06 Ge Medical Systems Global Technology Company Llc Ultrasonic monitoring system and method
US6761697B2 (en) 2001-10-01 2004-07-13 L'oreal Sa Methods and systems for predicting and/or tracking changes in external body conditions
CA2406684A1 (en) 2001-10-05 2003-04-05 Queen's University At Kingston Ultrasound transducer array
US6656124B2 (en) 2001-10-15 2003-12-02 Vermon Stack based multidimensional ultrasonic transducer array
WO2003040427A1 (en) 2001-10-16 2003-05-15 Data Storage Institute Thin film deposition by laser irradiation
SG122749A1 (en) 2001-10-16 2006-06-29 Inst Data Storage Method of laser marking and apparatus therefor
CN1263173C (en) 2001-12-06 2006-07-05 松下电器产业株式会社 Composite piezoelectric body and making method thereof
US7139676B2 (en) 2002-01-18 2006-11-21 Agilent Technologies, Inc Revising a test suite using diagnostic efficacy evaluation
US6705992B2 (en) 2002-02-28 2004-03-16 Koninklijke Philips Electronics N.V. Ultrasound imaging enhancement to clinical patient monitoring functions
US20030173870A1 (en) 2002-03-12 2003-09-18 Shuh-Yueh Simon Hsu Piezoelectric ultrasound transducer assembly having internal electrodes for bandwidth enhancement and mode suppression
JP3857170B2 (en) 2002-03-29 2006-12-13 日本電波工業株式会社 Ultrasonic probe
US6784600B2 (en) 2002-05-01 2004-08-31 Koninklijke Philips Electronics N.V. Ultrasonic membrane transducer for an ultrasonic diagnostic probe
US6676606B2 (en) 2002-06-11 2004-01-13 Koninklijke Philips Electronics N.V. Ultrasonic diagnostic micro-vascular imaging
US6612989B1 (en) 2002-06-18 2003-09-02 Koninklijke Philips Electronics N.V. System and method for synchronized persistence with contrast agent imaging
US6891311B2 (en) 2002-06-27 2005-05-10 Siemens Medical Solutions Usa, Inc Ultrasound transmit pulser with receive interconnection and method of use
US6994674B2 (en) 2002-06-27 2006-02-07 Siemens Medical Solutions Usa, Inc. Multi-dimensional transducer arrays and method of manufacture
US6806623B2 (en) 2002-06-27 2004-10-19 Siemens Medical Solutions Usa, Inc. Transmit and receive isolation for ultrasound scanning and methods of use
US6875178B2 (en) 2002-06-27 2005-04-05 Siemens Medical Solutions Usa, Inc. Receive circuit for ultrasound imaging
DE10229880A1 (en) 2002-07-03 2004-01-29 Siemens Ag Image analysis method and device for image evaluation for in vivo small animal imaging
CA2492140A1 (en) 2002-07-12 2004-01-22 Iscience Surgical Corporation Ultrasound interfacing device for tissue imaging
WO2004007098A1 (en) 2002-07-15 2004-01-22 Eagle Ultrasound As High frequency and multi frequency band ultrasound transducers based on ceramic films
JP4109030B2 (en) 2002-07-19 2008-06-25 オリンパス株式会社 Biological tissue clip device
EP1616525A3 (en) 2002-07-19 2006-02-01 Aloka Co., Ltd. Ultrasonic probe
DE10236854B4 (en) 2002-08-07 2004-09-23 Samsung SDI Co., Ltd., Suwon Method and device for structuring electrodes of organic light-emitting elements
JP3906126B2 (en) 2002-08-13 2007-04-18 株式会社東芝 Ultrasonic transducer and manufacturing method thereof
US7426904B2 (en) 2002-10-10 2008-09-23 Visualsonics Inc. Small-animal mount assembly
CA2501647C (en) * 2002-10-10 2013-06-18 Visualsonics Inc. High frequency high frame-rate ultrasound imaging system
AU2003277432A1 (en) 2002-10-16 2004-05-04 Varian Medical Systems Technologies, Inc. Method and apparatus for excess signal correction in an imager
US7052462B2 (en) 2002-10-24 2006-05-30 Olympus Corporation Ultrasonic probe and ultrasonic diagnostic equipment
US6822376B2 (en) 2002-11-19 2004-11-23 General Electric Company Method for making electrical connection to ultrasonic transducer
US6740037B1 (en) 2002-12-10 2004-05-25 Myron R. Schoenfeld High frequency ultrasonography utilizing constructive interference
US6831394B2 (en) 2002-12-11 2004-12-14 General Electric Company Backing material for micromachined ultrasonic transducer devices
US7377900B2 (en) 2003-06-02 2008-05-27 Insightec - Image Guided Treatment Ltd. Endo-cavity focused ultrasound transducer
US20050039323A1 (en) 2003-08-22 2005-02-24 Siemens Medical Solutions Usa, Inc. Transducers with electrically conductive matching layers and methods of manufacture
EP1511092B1 (en) 2003-08-29 2007-02-21 Fuji Photo Film Co., Ltd. Laminated structure, method of manufacturing the same and ultrasonic transducer array
US7249513B1 (en) 2003-10-02 2007-07-31 Gore Enterprise Holdings, Inc. Ultrasound probe
US20050089205A1 (en) 2003-10-23 2005-04-28 Ajay Kapur Systems and methods for viewing an abnormality in different kinds of images
US7156938B2 (en) 2003-11-11 2007-01-02 General Electric Company Method for making multi-layer ceramic acoustic transducer
US7017245B2 (en) 2003-11-11 2006-03-28 General Electric Company Method for making multi-layer ceramic acoustic transducer
US7109642B2 (en) 2003-11-29 2006-09-19 Walter Guy Scott Composite piezoelectric apparatus and method
TW200520019A (en) 2003-12-12 2005-06-16 Ind Tech Res Inst Control device of substrate temperature
EP1715897B2 (en) 2004-01-20 2013-10-30 Sunnybrook and Women's College Health Sciences Centre High frequency ultrasound imaging using contrast agents
US20050203402A1 (en) 2004-02-09 2005-09-15 Angelsen Bjorn A. Digital ultrasound beam former with flexible channel and frequency range reconfiguration
US20070222339A1 (en) 2004-04-20 2007-09-27 Mark Lukacs Arrayed ultrasonic transducer
US20050251232A1 (en) 2004-05-10 2005-11-10 Hartley Craig J Apparatus and methods for monitoring heart rate and respiration rate and for monitoring and maintaining body temperature in anesthetized mammals undergoing diagnostic or surgical procedures
US7451650B2 (en) 2004-08-27 2008-11-18 General Electric Company Systems and methods for adjusting gain within an ultrasound probe
US8002708B2 (en) * 2005-01-11 2011-08-23 General Electric Company Ultrasound beamformer with scalable receiver boards
US8137280B2 (en) 2005-02-09 2012-03-20 Surf Technology As Digital ultrasound beam former with flexible channel and frequency range reconfiguration
US7798963B2 (en) 2005-03-04 2010-09-21 Visualsonics Inc. Method for synchronization of breathing signal with the capture of ultrasound data
EP1863377A4 (en) 2005-04-01 2010-11-24 Visualsonics Inc System and method for 3-d visualization of vascular structures using ultrasound
GB0518105D0 (en) 2005-09-06 2005-10-12 Plastic Logic Ltd Step-and-repeat laser ablation of electronic devices
US20070059247A1 (en) 2005-08-30 2007-03-15 Lindner Jonathan R Deposit contrast agents and related methods thereof
EP1922775B1 (en) 2005-09-06 2017-05-10 Flexenable Limited Laser ablation of electronic devices
US7946990B2 (en) 2005-09-30 2011-05-24 Siemens Medical Solutions Usa, Inc. Ultrasound color flow imaging at high frame rates
WO2007041460A2 (en) 2005-10-03 2007-04-12 Aradigm Corporation Method and system for laser machining
EP1952175B1 (en) 2005-11-02 2013-01-09 Visualsonics, Inc. Digital transmit beamformer for an arrayed ultrasound transducer system
CN101405090A (en) 2005-11-02 2009-04-08 视声公司 Arrayed ultrasonic transducer
US7603153B2 (en) 2005-12-12 2009-10-13 Sterling Investments Lc Multi-element probe array
US7750536B2 (en) 2006-03-02 2010-07-06 Visualsonics Inc. High frequency ultrasonic transducer and matching layer comprising cyanoacrylate
US20080007142A1 (en) 2006-06-23 2008-01-10 Minoru Toda Ultrasonic transducer assembly having a vibrating member and at least one reflector
US7892176B2 (en) 2007-05-02 2011-02-22 General Electric Company Monitoring or imaging system with interconnect structure for large area sensor array
US7518290B2 (en) 2007-06-19 2009-04-14 Siemens Medical Solutions Usa, Inc. Transducer array with non-uniform kerfs
US8008842B2 (en) 2007-10-26 2011-08-30 Trs Technologies, Inc. Micromachined piezoelectric ultrasound transducer arrays

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US53653A (en) 1866-04-03 Improvement in harvester-rakes
US53748A (en) 1866-04-03 Improved water-can for railroad-cars
US683168A (en) 1900-12-14 1901-09-24 William V Bleha Hat-hanger.
US683870A (en) 1901-01-17 1901-10-01 James O Wright Dredge-bucket.
US736232A (en) 1903-02-18 1903-08-11 Cambridge Mfg Company Golf-ball.
US6083164A (en) 1997-06-27 2000-07-04 Siemens Medical Systems, Inc. Ultrasound front-end circuit combining the transmitter and automatic transmit/receive switch
US6851392B2 (en) 2002-10-10 2005-02-08 Visual Sonics Small-animal mount assembly
US7052460B2 (en) 2003-05-09 2006-05-30 Visualsonics Inc. System for producing an ultrasound image using line-based image reconstruction
US20050272183A1 (en) 2004-04-20 2005-12-08 Marc Lukacs Arrayed ultrasonic transducer
US10998605B2 (en) 2017-10-25 2021-05-04 Tesat-Spacecom Gmbh & Co. Kg Connecting unit for connecting to first and second interfaces, where the connecting unit comprises an internal conductor disposed within a housing formed by half-shell construction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BROWN J ET AL.: "A Digital Beamformer for High-Frequency Annular Arrays", IEEE TRANS. UFFC, vol. 52, no. 8, August 2005 (2005-08-01), pages 1262 - 1269

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE46185E1 (en) 2005-11-02 2016-10-25 Fujifilm Sonosite, Inc. High frequency array ultrasound system
US7901358B2 (en) 2005-11-02 2011-03-08 Visualsonics Inc. High frequency array ultrasound system
WO2007140593A1 (en) 2006-06-02 2007-12-13 St. Michael's Hospital Ultrasonic evaluation of venous structures
US9232930B2 (en) 2006-06-02 2016-01-12 St. Michael's Hospital Ultrasonic evaluation of venous structures
US8529452B2 (en) 2006-06-02 2013-09-10 Sandra Donnelly Ultrasonic evaluation of venous structures
EP2324337A4 (en) * 2008-08-18 2012-02-22 Univ Virginia Patent Found Front end circuitry for imaging systems and methods of use
EP2324337A1 (en) * 2008-08-18 2011-05-25 University Of Virginia Patent Foundation Front end circuitry for imaging systems and methods of use
US10596597B2 (en) 2008-09-18 2020-03-24 Fujifilm Sonosite, Inc. Methods for manufacturing ultrasound transducers and other components
US12029131B2 (en) 2008-09-18 2024-07-02 Fujifilm Sonosite, Inc. Methods for patterning electrodes of ultrasound transducers and other components
US11845108B2 (en) 2008-09-18 2023-12-19 Fujifilm Sonosite, Inc. Methods for manufacturing ultrasound transducers and other components
US11094875B2 (en) 2008-09-18 2021-08-17 Fujifilm Sonosite, Inc. Methods for manufacturing ultrasound transducers and other components
US9935254B2 (en) 2008-09-18 2018-04-03 Fujifilm Sonosite, Inc. Methods for manufacturing ultrasound transducers and other components
JP5514120B2 (en) * 2008-11-14 2014-06-04 株式会社日立メディコ Ultrasonic diagnostic apparatus and ultrasonic image generation method
WO2010055819A1 (en) * 2008-11-14 2010-05-20 株式会社 日立メディコ Ultrasonographic device and method for generating ultrasonogram
CN103033807A (en) * 2011-09-30 2013-04-10 中国科学院声学研究所 Portable ultrasonic imaging system receiving front-end device
US9482748B2 (en) 2011-12-12 2016-11-01 Super Sonic Imagine Ultrasound imaging system, and a processing device used inside said ultrasound imaging system
WO2013087402A1 (en) * 2011-12-13 2013-06-20 Robert Bosch Gmbh Apparatus for detecting audible signals and associated method
DE102011088346B4 (en) 2011-12-13 2022-01-05 Robert Bosch Gmbh Device for detecting acoustic signals and the associated method
CN104169740A (en) * 2011-12-13 2014-11-26 罗伯特·博世有限公司 Apparatus for detecting audible signals and associated method
ES2525600R1 (en) * 2012-05-25 2015-01-29 Consejo Superior De Investigaciones Científicas (Csic) METHOD FOR REAL-TIME CONTROL OF DYNAMIC FOCUSING IN ULTRASONIC IMAGING SYSTEMS AND ASSOCIATED ADVANCED SAMPLE CALCULATOR DEVICE
CN103826541A (en) * 2012-07-31 2014-05-28 株式会社东芝 Ultrasonic diagnostic device and control method
US10772483B2 (en) 2015-08-07 2020-09-15 Olympus Corporation Imaging apparatus
EP3518781A4 (en) * 2016-09-28 2020-06-17 Covidien LP System and method for parallelization of cpu and gpu processing for ultrasound imaging devices
WO2018087400A1 (en) 2016-11-14 2018-05-17 Koninklijke Philips N.V. Triple mode ultrasound imaging for anatomical, functional, and hemodynamical imaging
US11715454B2 (en) 2018-08-16 2023-08-01 Samsung Medison Co. Ltd. Beamforming device, method of controlling the same, and ultrasound diagnostic apparatus

Also Published As

Publication number Publication date
USRE46185E1 (en) 2016-10-25
JP2017035528A (en) 2017-02-16
HK1129243A1 (en) 2009-11-20
JP2014210201A (en) 2014-11-13
WO2007056104A3 (en) 2007-08-30
US20070239001A1 (en) 2007-10-11
JP5630958B2 (en) 2014-11-26
CA2935422C (en) 2019-01-08
ES2402741T3 (en) 2013-05-08
CA2935422A1 (en) 2007-05-18
JP2009514600A (en) 2009-04-09
JP2014000465A (en) 2014-01-09
EP1952175B1 (en) 2013-01-09
CA2628100C (en) 2016-08-23
EP1952175A2 (en) 2008-08-06
US7901358B2 (en) 2011-03-08
WO2007056104A9 (en) 2007-07-12
CN101351724B (en) 2013-03-20
JP5690900B2 (en) 2015-03-25
CA2628100A1 (en) 2007-05-18
CN101351724A (en) 2009-01-21

Similar Documents

Publication Publication Date Title
USRE46185E1 (en) High frequency array ultrasound system
US5904652A (en) Ultrasound scan conversion with spatial dithering
US6752763B2 (en) Orthogonally reconfigurable integrated matrix acoustical array
Tortoli et al. ULA-OP: An advanced open platform for ultrasound research
JP5847719B2 (en) Ultrasonic three-dimensional image forming system
TW381226B (en) Portable ultrasound imaging system
US6595921B1 (en) Medical diagnostic ultrasound imaging system and method for constructing a composite ultrasound image
JP5496865B2 (en) Portable ultrasound imaging system
JPH0856944A (en) Ultrasonic wave beam former
US9254118B2 (en) Floating transducer drive, system employing the same and method of operating
JP2008229096A (en) Ultrasonic diagnostic apparatus
JP2018502654A (en) System and method for beamforming using variable sampling
US20090076394A1 (en) High-frequency tissue imaging devices and methods
WO2003000137A1 (en) Orthogonally reconfigurable integrated matrix acoustical array
Campbell et al. An Ultrafast High-Frequency Hardware Beamformer for a Phased Array Endoscope
JP2023531979A (en) Ultrasound transducer probe based analog-to-digital conversion for continuous wave Doppler and related apparatus, systems and methods
JP2001137220A (en) Medical imaging apparatus
Bera Fast volumetric imaging using a matrix TEE probe with partitioned transmit-receive array
AU8361801A (en) Ultrasound scan conversion with spatial dithering

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase
    Ref document number: 2628100
    Country of ref document: CA
ENP Entry into the national phase
    Ref document number: 2008539044
    Country of ref document: JP
    Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
WWE Wipo information: entry into national phase
    Ref document number: 2006827417
    Country of ref document: EP
WWE Wipo information: entry into national phase
    Ref document number: 200680050246.3
    Country of ref document: CN