US20050140682A1 - Graphics processing unit for simulation or medical diagnostic imaging - Google Patents
- Publication number
- US20050140682A1 (application Ser. No. 11/060,046)
- Authority
- US
- United States
- Prior art keywords
- data
- processing unit
- memory
- graphics
- gpu
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Definitions
- the present invention relates to a graphics processing unit. Loading and processing data with the graphics processing unit (GPU) is controlled.
- GPU graphics processing unit
- GPUs are provided as video cards on personal computers.
- a central processing unit (CPU) coordinates the transfer of data from a random access memory to the GPU for video rendering.
- a memory control hub is connected by various buses to each of a source, a RAM, the CPU and the GPU.
- an AGP chip set is used as a memory control hub.
- the memory control hub controls the data transfers between any of the various interconnected devices.
- the data is obtained from the source, such as a CD, diskette, or hard drive.
- the data from the source is routed to a random access memory (RAM).
- the CPU then copies the data from the random access memory into the CPU cache memory.
- the CPU copies the data to a graphics aperture region of the RAM controlled pursuant to a graphics address re-mapping table (GART). Prior to copying the data to the graphics aperture region, the CPU may also reformat the data. This is because the GPU expects the data to be in a particular format in order to deliver maximum throughput.
- the data from the graphics aperture region is then transferred through an accelerated graphics port (AGP) to the video memory of the GPU.
- AGP accelerated graphics port
- the GPU then performs various rendering or video processing and outputs a resulting image to a display.
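The conventional path above involves several distinct copies of the same data. A minimal Python sketch (illustrative only; the function and helper names are invented, and `swizzle` merely stands in for the GPU-specific reformatting) models each copy as a separate list:

```python
# Sketch of the conventional AGP data path described above:
# source -> RAM -> CPU cache -> graphics aperture region -> video memory.
# Each list() models one copy of the data.

def swizzle(sample):
    # placeholder for GPU-specific formatting; identity here
    return sample

def conventional_agp_transfer(source_data):
    ram = list(source_data)                     # data routed from the source into RAM
    cpu_cache = list(ram)                       # CPU copies RAM into its cache
    aperture = [swizzle(s) for s in cpu_cache]  # CPU reformats, copies to the GART aperture
    video_memory = list(aperture)               # transfer over AGP into the GPU video memory
    return video_memory
```

Note the same data is duplicated four times along the way, which is the inefficiency the later embodiments avoid.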
- API application programming interface
- the CPU controls operation of the GPU.
- since the CPU copies the data from the RAM to the graphics aperture region of the RAM, the data is copied multiple times. Any loading on the CPU for other processing may delay the transfer of data to the GPU. Since the CPU operations may be interrupted, the transfer of data to the GPU is inconsistent or non-deterministic.
- the AGP hardware discussed above may be used in a medical diagnostic ultrasound system, such as disclosed in U.S. Pat. No. 6,358,204, the disclosure of which is incorporated herein by reference.
- the memory control hub connects a CPU to a memory.
- the two other ports of the memory control hub are connected to two different buses, one a system bus and another an ultrasound data bus.
- the ultrasound data bus connects to a source of ultrasound data as well as outputs of ultrasound data, such as a scan converter connected with the display.
- data to be processed by a graphics processing unit is transferred from a source to the graphics processing unit without copying by the central processing unit.
- the central processing unit does not copy data to the cache.
- the source of data transfers the data directly to the graphics processing unit or directly to a graphics aperture region of a memory for transfer to the video memory of the GPU.
- the GPU is then used to generate a two-dimensional or three-dimensional image.
- the GPU is used to perform a medical imaging process, such as an ultrasound imaging process.
- the processed data is transferred to a different processor. Since the GPU provides various parallel processors, the GPU may more efficiently perform data processing different from rendering a two-dimensional or three-dimensional image.
- a graphics processing unit system for diagnostic medical ultrasound imaging.
- a graphics processing unit has an input and at least one output.
- the graphics processing unit is operable to process ultrasound data from the input.
- a processor connects with the output of the graphics processing unit.
- the processor is operable to process ultrasound data output on the output of the graphics processing unit.
- a method for diagnostic medical ultrasound imaging with a graphics processing unit is provided.
- Ultrasound data is processed with the graphics processing unit.
- Ultrasound data output from the graphics processing unit is then further processed with a different processor prior to generating a display responsive to the ultrasound data.
- a display responsive to the ultrasound data is then generated.
- an improvement in a method for loading a video memory of a graphics processing unit is provided.
- a central processing unit interacts with a memory, such as a RAM memory, and the graphics processing unit.
- data is loaded into the video memory without storing the data in a cache of the central processing unit.
- a system for loading a video memory of a graphics processing unit connects with the graphics processing unit.
- the central processing unit is operable to run an application programming interface of the graphics processing unit.
- a source of data connects with the graphics processing unit. The data is transferable from the source to the video memory without copying of the data by the central processing unit.
- FIG. 1 is a block diagram of one embodiment of a system for loading a video memory of a graphics processing unit
- FIG. 2 is a flow chart diagram of one embodiment of a method for loading a video memory of a graphics processing unit
- FIG. 3 is a block diagram of one embodiment of a graphics processing unit and interconnected processor
- FIG. 4 is a flow chart diagram of one embodiment of a method for processing diagnostic medical ultrasound data with a graphics processing unit.
- the routing of data for loading into a video memory of a GPU is controlled.
- the GPU performs image processing different than two- or three-dimensional rendering of an image.
- the GPU performs general mathematical computations.
- the GPU performs two or three-dimensional renderings of an image.
- a combination of the two aspects discussed above is provided. The different aspects may be used independently or separately in other embodiments. Immediately below, embodiments directed to loading data into the video memory are provided. Subsequently, embodiments directed to performing different processes with the GPU are provided.
- FIG. 1 shows one embodiment of a system 10 for loading a video memory 12 of a GPU 14 .
- a memory control hub 16 interconnects the GPU 14 with the CPU 18 , a memory 20 and a source of data 22 . Additional, different or fewer components may be provided.
- the GPU 14 connects to the source 22 without one or more of the memory control hub 16 , the CPU 18 and the memory 20 .
- an additional component connects to the memory control hub 16 .
- the system 10 is a system configured pursuant to the AGP specification, but may be configured pursuant to different specifications, such as PCI, PCI-X, PCI Express, or arrangements with or without any of the various components.
- the system 10 is a personal computer for generating graphical images, such as simulations.
- the system 10 may also be used as a work station for generating graphical images from data representing an object, such as a scanned picture.
- the system 10 is a medical imaging system, such as an X-ray, MRI, computed tomography, diagnostic ultrasound or other now known or later developed medical imaging system.
- the GPU 14 is a processor, circuit, application specific integrated circuit, digital signal processor, video card, combinations thereof or other now known or later developed device for graphics processing.
- the GPU 14 is a graphics processor or video card provided by nVIDIA, ATI or Matrox. These or other devices using an API of OpenGL, DirectX or other now known or later developed APIs may be used.
- the GPU 14 includes one or more vertex processors, such as 16 vertex processors, and one or more fragment processors, such as 64 fragment processing units. Other analog or digital devices may also be included, such as rasterization and interpolation circuits.
- One or more frame buffers may be provided for outputting data to a display.
- the GPU 14 receives data in one or more formats and generates two- or three-dimensional images based on the data, such as by performing texture mapping or other two- or three-dimensional rendering. For example, the data received represents various objects with associated spatial relationships.
- the GPU 14 is operable to determine the relative positioning of the data and generate fragments representing data visible from a particular viewing direction. GPU 14 is operable to decompress data, so that the bandwidth of data transferred to the GPU 14 is maximized through compression. Alternatively, uncompressed data is transferred to the GPU 14 .
- the GPU 14 includes the video memory 12 .
- the video memory 12 is a random access memory, but other now known or later developed memories may be used.
- the video memory 12 stores any of various amounts of information, such as 64, 128, 256 or another number of kilobytes.
- the GPU 14 accesses information from the video memory 12 for graphics processing. Graphics processing is performed pursuant to the API run by the CPU 18 .
- the CPU 18 is a general processor, application specific integrated circuit, dedicated processor, digital signal processor, digital circuit, analog circuit, combinations thereof or other now known or later developed processing device.
- the central processing unit 18 is a processor operable to control a system pursuant to the AGP specification. In alternative embodiments, processors operating pursuant to the same or different specifications may be provided.
- the CPU 18 is configured in a parallel processing arrangement, such as including two or more processors for controlling or processing data. Any of various now known or later developed processors may be used.
- the CPU 18 connects with the GPU 14 for running an application programming interface of the GPU 14 .
- the CPU 18 provides instructions pursuant to the API for controlling the graphics rendering.
- the CPU 18 implements a driver for the GPU 14 operable to accept pre-formatted data without processing by the CPU 18 .
- the CPU 18 also controls the memory control hub 16 and associated memory 20 .
- the CPU 18 controls or processes data from the source 22 .
- the source 22 operates independently of the CPU 18 .
- the memory 20 is a random access memory, such as arranged in one, two or more different chips or chip sets. Other now known or later developed memories may be used.
- the memory 20 is connected with the CPU 18 , such as through the memory control hub 16 for allowing the CPU 18 access to the memory 20 .
- the memory 20 is controlled by the CPU 18 .
- the memory 20 has a common address scheme accessible by the memory control hub 16 or the CPU 18 .
- a section or group of addresses of the memory 20 is assigned as a graphics aperture region.
- the addresses associated with the graphics aperture region identify addresses for data to be transferred to the video memory 12 .
- the graphics aperture region is generally not accessible for uses other than transfer of data to the GPU 14 .
- the size of the graphics aperture region matches the size of the video memory 12 .
- the graphics aperture region is accessible for other uses or may be a different size than the video memory 12 .
- the GPU 14 , the memory control hub 16 or the CPU 18 causes data stored in the graphics aperture region to be transferred or copied to the video memory 12 .
- the graphics aperture region and common address scheme are configured as part of a contiguous memory controlled by a Graphics Address Re-mapping Table (GART) for controlling access to the memory 20 .
- GART Graphics Address Re-mapping Table
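The re-mapping can be sketched as a simple page table: the GPU sees a contiguous aperture, while each aperture page maps to a possibly scattered physical page in system RAM. This is an illustrative model only; the page size and the `gart_translate` helper are assumptions, not the patent's implementation:

```python
# Hypothetical model of a Graphics Address Re-mapping Table (GART).
# gart[i] gives the physical page backing aperture page i.

PAGE = 4096  # assumed page size

def gart_translate(gart, aperture_addr):
    """Translate a contiguous aperture address to a physical RAM address."""
    page, offset = divmod(aperture_addr, PAGE)
    return gart[page] * PAGE + offset
```

For example, with `gart = [7, 3]`, aperture page 0 resolves into physical page 7 and aperture page 1 into physical page 3, even though the two physical pages are not adjacent.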
- the graphics aperture region is operable to slide or change memory addresses in an address loop.
- the addresses of the graphics aperture region slide within the memory 20 such that the start and end memory locations of the graphics aperture region can be incremented or decremented within the region.
- the address is shifted to the opposing lower or upper end respectively in a circular fashion.
- a memory loop of graphics data is provided within the graphics aperture region.
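The sliding aperture described above behaves like a circular buffer over a fixed block of addresses: when the upper end is reached, writes wrap to the lower end. A hypothetical sketch (the class and method names are invented, not from the patent):

```python
class SlidingAperture:
    """Circular address loop within a fixed graphics aperture region.

    Illustrative model only; real GART address sliding is done by the
    memory controller/driver, not application code.
    """

    def __init__(self, size):
        self.buf = [None] * size
        self.head = 0  # next write address within the region

    def write_frame(self, frame):
        addr = self.head
        self.buf[addr] = frame
        # slide the address: wrap from the upper end back to the lower end
        self.head = (self.head + 1) % len(self.buf)
        return addr
```

Writing one more frame than the region holds overwrites the earliest frame, which is the same behavior the CINE-loop storage described later relies on.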
- the memory 20 is divided up into separately accessible sections or includes multiple devices that are separately controlled and accessed.
- the source 22 is a database, memory, sensor, CD, disk drive, hard drive, tape, tape reader, modem, computer network, or other now known or later developed source of graphics data.
- the source 22 is a program or software operated by the CPU 18 to generate a graphics simulation.
- the source of data 22 is a medical sensor, such as an x-ray, MRI, computed tomography or medical diagnostic ultrasound scanner. Medical diagnostic imaging data is provided by the source 22 .
- the source 22 is an ultrasound beamformer operable to receive acoustic echoes representing a patient. Ultrasound data includes in-phase and quadrature data from a beamformer and Spectral Doppler data.
- the ultrasound beamformer generates one or more data samples representing different spatial locations along a plurality of beams of a scan of a patient. Frames of data representing each two- or three-dimensional region are then output.
- the ultrasound data is detected using any of various detectors, such as B-mode, Doppler, harmonic, contrast agent or other now known or later developed detectors.
- the ultrasound beamformer provides the data prior to beamforming, ultrasound data prior to detection, or ultrasound data after detection.
- the source of data 22 is connected with the GPU 14 either directly or through one or more devices as shown in FIG. 1 .
- the source 22 includes a processor, a buffer, or formatter for configuring the data.
- a buffer and processor are used with an ultrasound beamformer for GPU-specific formatting of texture data acquired in a polar coordinate format for three-dimensional texture rendering by the GPU 14 .
- the GPU 14 uses a format for three-dimensional texturing to optimize memory access speeds. The data is arranged in an order to provide a GPU-specific format for data transfer.
- the GPU 14 includes a buffer, processor or formatter for GPU-specific formatting of the data. For 3D texture or other data, different formats may be used for the data provided from the source 22 or for the data used by the GPU 14 .
- the source 22 is operable to provide data representing a three-dimensional volume.
- an ultrasound medical sensor and associated beamformer are operable to scan a patient in a three-dimensional or volume region. The scan is performed using a mechanically-moved or multi-dimensional transducer array to scan a volume by firing a plurality of ultrasound lines. Ultrasound data is then provided in sets of data representing a three-dimensional volume. More than one set may represent the same volume at a same time, such as providing a Doppler set and a B-mode set of data. For four-dimensional imaging, a plurality of sets representing the volume at different times is provided. As another example, two sets of data, processed differently, are used. Processing includes filtering.
- Spatial, frequency or other filtering may be provided for processing the data.
- One processed set of data is used for three-dimensional volume rendering.
- the other processed set of data is used for generating two-dimensional representations or slices of the volume.
- one processed set of data is overwritten as each set is acquired to conserve memory space.
- the other processed set of data is maintained throughout a time period for later processing, three-dimensional rendering or two-dimensional imaging.
- both processed data sets are stored representing volumes at multiple times.
- sets of data are maintained until the extent of the graphics aperture region has been used. The addresses are then looped back to data representing the earliest set, and the more recently acquired data is overwritten in a CINE loop fashion.
- the memory control hub 16 is a processor, a bus, an application specific integrated circuit, an AGP chip set, an AGP controller, combination thereof or other now known or later developed device for interfacing between two or more of the GPU 14 , CPU 18 , memory 20 and the source 22 .
- a single device is provided for the interface, such as a single circuit or chip, but multiple devices may be provided in any of various possible architectures for transferring data between any two or more of the devices connected with the memory control hub 16 .
- the various devices directly connect through a single data bus or with each other without the memory control hub 16 .
- Memory control hub 16 connects with the GPU 14 with an accelerated graphics bus, connects with the CPU 18 with a host or front side bus, connects with the memory 20 with a memory bus, and connects with the source 22 with a PCI-X or PCI acquisition bus.
- Different buses or signal lines may be used for any of the various connections discussed above, including now known or later developed connections.
- the data is routed from the source 22 to the graphics aperture region or to the video memory 12 without copying or loading of the data by the CPU 18 .
- the data from the source 22 is routed using a driver, software or other control implemented by the memory control hub 16 , the CPU 18 , the GPU 14 or another device.
- the data from the source 22 is operable to be routed to the video memory 12 through the memory control hub 16 without passing to the CPU 18 .
- the data for processing by the GPU 14 is not stored in the cache memory of the CPU 18 .
- the data from the source 22 is operable to be routed to the video memory 12 from the source 22 through the graphics aperture region of the memory 20 without passing to the CPU 18 .
- the data is written to the memory 20 directly into the graphics aperture region for transfer or copying to the video memory 12 by the memory control hub 16 .
- the data is operable to be routed to the video memory 12 from the source 22 without passing to the CPU 18 or the associated memory 20 .
- the memory 20 , including the graphics aperture region, is avoided by directly routing the data from the source 22 to the video memory 12 .
- FIG. 2 shows a method for loading a video memory of a graphics processing unit using the system 10 of FIG. 1 or another system.
- the CPU interacts with a memory separate from the GPU.
- An improvement is provided by loading data into the video memory without storing the data in a cache of the CPU in act 24 .
- the CPU does not load the data from the source for the GPU into the cache of the CPU.
- the data is provided to the GPU 14 without copying by the CPU, such as without copying from one location of the memory 20 to a graphics aperture region of the memory.
- in response to user selections or otherwise configuring the system, the CPU begins an application program using the GPU.
- a user selects three-dimensional or four-dimensional imaging.
- the CPU then instructs the GPU to become the bus-master and download data from the graphics aperture region or the source.
- the video memory is loaded without processing the data by the CPU during the transfer.
- the data is transferred without GPU-specific formatting (e.g., swizzling) or copying by the CPU.
- the CPU performs control aspects of the data transfer by signaling the GPU and/or other devices.
- the data is transferred to a graphics aperture region of the memory associated with the CPU, such as a RAM.
- ultrasound data is written from a beamformer or other medical sensor into the graphics aperture region of the memory of FIG. 1 .
- the source writes the data directly into the graphics aperture region.
- the data is formatted for use by the GPU and output from the source.
- the GPU-specific formatting by the CPU for three-dimensional texture data is avoided.
- the source performs any GPU-specific formatting.
- the data is provided to the graphics aperture region without a particular format for the GPU.
- the GPU-specific formatting is performed by the GPU after transfer to the GPU.
- the data written into the graphics aperture region is transferred to the video memory without processing or copying of the data by the CPU.
- the GPU acquires control of the bus or a portion of the bus connected with the memory having the graphics aperture region (i.e., the GPU bus-masters).
- the GPU then downloads the data from the graphics aperture region into the video memory.
- the CPU, the memory control hub, the source or another device controls one or more buses to cause the transfer of the data to the graphics aperture region. While the CPU is operable to run an application programming interface for controlling the GPU, the CPU operates free of copying data between different locations of the memory for transfer to the video memory.
- the source writes the data to the graphics aperture region where the graphics aperture region slides by using an address loop as discussed above. All or a portion of the graphics aperture region uses the looping address structure to allow one type of data or all of the data to be configured in a loop fashion for representing a volume or area at different times.
- the source writes the data to the video memory in act 30 , such as transferring the data to the video memory without storing the data in a graphics aperture region.
- the GPU controls the transfer or otherwise acquires the data from the source.
- the source, the memory control hub, or the CPU controls one or more buses for causing the data to be written to the video memory.
- some data output by the source is directed to the video memory without transfer to the graphics aperture region while other data is transferred to the video memory through the graphics aperture region.
- a subset of data may be copied by the CPU, stored in the cache of the CPU or otherwise processed by the CPU as part of the transfer to the video memory. CINE-buffering or otherwise providing storage of different representations of the same volume at different times is provided in the video memory or as a function of the timing of the output of data from the source.
- the data is formatted for the GPU without processing by the CPU. Any of various formats may be provided.
- the formatting includes compression of the data prior to the transfer to the video memory. After the transfer to the video memory, the data is decompressed with the GPU. Any of lossy or lossless compression schemes now known or later developed may be used, such as texture compression.
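A minimal model of the compress-before-transfer idea, using Python's standard `zlib` as a stand-in for the texture-compression schemes mentioned above (in the patent's arrangement the decompression would run on the GPU, not the CPU):

```python
import zlib

def transfer_compressed(raw: bytes) -> bytes:
    # Source compresses before the bus transfer so less data crosses the bus...
    compressed = zlib.compress(raw)
    # ...the smaller payload is transferred to video memory...
    # ...and decompressed on the receiving side (the GPU, per the description).
    return zlib.decompress(compressed)
```

Because `zlib` is lossless, the round trip preserves the data exactly while the payload on the bus is smaller for typical redundant imaging data.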
- the transfer speed between the source and the GPU may be increased.
- Increased transfer speed may allow for increased volume rendering rates for three-dimensional or four-dimensional imaging. Any interrupts or other processing performed by the CPU may not delay the transfer of data to the GPU. Windows or other operating system latencies may have no or minimal effect on the volume rendering by the GPU.
- Increased volume rendering rates due to increased data transfer rates may allow for four-dimensional cardiology volume rendering. Overlapping pipeline sequences for transferring data or other operations to increase parallel transfers of data may also increase transfer rates.
- FIG. 3 shows a graphics processing unit system 32 for diagnostic medical ultrasound imaging.
- the system 32 includes a GPU 14 in the configuration discussed above for FIG. 1 , in a configuration disclosed in any one of U.S. Pat. Nos. ______, and ______ (application Ser. Nos. 10/644,363, and 10/388,128), the disclosures of which are incorporated herein by reference, or other GPUs provided anywhere in medical diagnostic ultrasound systems (e.g., a system with an ultrasound transducer or a workstation for processing ultrasound data).
- the GPU 34 includes a programmable vertex processor 36 , a primitive assembly processor 38 , a rasterization and interpolation processor 40 , a programmable fragment processor 42 and a frame buffer 44 . Additional, different or fewer components may be provided. Any of the processors of the GPU 34 are general processors, digital circuits, analog circuits, application specific integrated circuits, digital processors, graphics accelerator cards, display cards or other devices now known or later developed. In one embodiment, the GPU 34 is implemented as a series of discrete devices on a motherboard or as a daughter board, but may be implemented as a single chip, a circuit on a card or other layout.
- the programmable vertex processor 36 is a group of 16 parallel processing units in one embodiment, but fewer or greater number of processors may be provided.
- the fragment processor 42 is a parallel arrangement of 64 processing units in one embodiment, but more or fewer processing units may be provided.
- FIG. 3 shows the graphics processing pipeline standardized by APIs such as OpenGL and DirectX.
- the GPU 34 includes a programmable vertex processor 36 , a primitive assembly 38 , a rasterization and interpolation block 40 , a programmable fragment processor 42 and a frame-buffer 44 .
- the input to the vertex processor 36 is a set of vertices in two- or three-dimensional space. Each vertex has a set of attributes such as coordinates, color, texture coordinates, etc.
- the vertex processor 36 transforms the coordinates of the vertices into a frame of reference.
- the output of the vertex processor 36 is a set of vertices with new attributes changed by the vertex processor 36 .
- vertices are fed into the next stage, the primitive assembly 38 .
- the vertices are grouped together to form points, lines and triangles.
- These primitives are then fed into the rasterization and interpolation stage 40 .
- This stage rasterizes each primitive, such as points, lines and triangles, into a set of fragments.
- a fragment is a pixel with a depth associated with it and is located on a primitive.
- the fragments have attributes such as color, coordinates and texture coordinates, etc.
- the next stage, programmable fragment processor 42 takes in these fragments, applies various processes on them, and creates pixels.
- the pixels have attributes, such as color, and are written into the final stage, the frame-buffer 44 .
- the blocks shown in FIG. 3 are high level blocks. Each block contains many other finer processing stages.
- the rasterization and interpolation stage 40 can contain operations such as Scissor Test, Alpha Test, Stencil Test, Depth Test, etc.
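The stages of FIG. 3 can be modeled end to end as a few pure functions. This is a toy software pipeline, not GPU code: the rasterization stub simply emits one fragment per vertex, and the per-fragment scissor/alpha/stencil/depth tests are omitted; all names are illustrative:

```python
# Toy software model of the graphics pipeline of FIG. 3:
# vertices -> vertex processor -> primitive assembly ->
# rasterization/interpolation -> fragment processor -> frame buffer.

def run_pipeline(vertices, transform, shade):
    # vertex processor: transform each vertex into the viewing frame of reference
    transformed = [transform(v) for v in vertices]
    # primitive assembly: group vertices into triangles (points/lines analogous)
    triangles = [tuple(transformed[i:i + 3])
                 for i in range(0, len(transformed) - 2, 3)]
    # rasterization stub: emit fragments located on each primitive
    fragments = [v for tri in triangles for v in tri]
    # fragment processor: per-fragment processing produces pixels,
    # which are written into the frame buffer
    frame_buffer = [shade(f) for f in fragments]
    return frame_buffer
```

For example, `run_pipeline([1, 2, 3, 4, 5, 6], lambda v: v * 2, lambda f: f + 1)` pushes six "vertices" through two "triangles" and returns six shaded "pixels".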
- the frame buffer 44 is a memory, buffer or other device for receiving the pixels from the fragment processor 42 for display on the display 46 .
- the GPU 34 is operable to receive graphics data and generate a display on the display 46 from the graphics data.
- the process is performed pursuant to an application programming interface, such as GDI, GDI+, DirectX, OpenGL, or other APIs now known or later developed.
- the GPU 34 is used to process ultrasound data for other purposes than this immediate display. For example, in-phase and quadrature data, post detection data, log compressed data, scan converted or any other ultrasonic data is input to the GPU 34 .
- the ultrasound data is processed.
- OpenGL, DirectX extensions or other programming languages, such as Cg shader language program the GPU 34 to process ultrasound data.
- HLSL, Stanford's high-level shader language, or other now known or later developed shader languages may also be used.
- Some resource-intensive computations are performed by the GPU 34 rather than another processor, such as a CPU, DSP, ASIC or FPGA. Since the GPU 34 functions as a computational engine, one or more additional outputs are provided. For example, an output is provided downstream of the programmable vertex processor 36 but upstream of the fragment processor 42 . As an alternative or additional output, an output is provided after the fragment processor 42 . Alternatively, the output from the frame buffer is used.
- Either or both of the vertex processor 36 and fragment processor 42 are programmed to perform ultrasound data processing.
- the vertex processor 36 is programmed or operable to perform scan conversion operations. Using the vector or matrix type polar coordinate data, the vertex processor reformats each spatial location into a format appropriate for a display.
- the fragment processor 42 is operable to perform Fourier transforms or non-linear scan conversion operations. Scan converted ultrasound data output by the vertex processor 36 is provided to the programmable fragment processor 42 for non-linear operations through interpolation or other fragment processes.
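The scan conversion attributed to the vertex processor is, at its core, a polar-to-Cartesian mapping of each sample's (range, beam angle) pair into display coordinates. A sketch of that mapping for a single sample (a real vertex program would apply it to every sample location in parallel; the axis convention below is an assumption):

```python
import math

def scan_convert(r, theta_deg):
    """Map one polar-coordinate sample (range, beam angle) to display x, y.

    Convention assumed here: angle measured from the axial (depth) axis,
    x lateral, y axial.
    """
    theta = math.radians(theta_deg)
    x = r * math.sin(theta)  # lateral position on the display
    y = r * math.cos(theta)  # axial (depth) position on the display
    return x, y
```

A sample at zero beam angle lands straight down the depth axis; a sample on a 90-degree beam lands purely laterally.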
- the GPU 34 is operable to process ultrasound data and provide the processed data to a different image processor 48 .
- the image processor 48 provides data for the display 46 or routes data back to the GPU 34 for rendering to the display 46 .
- the processor 48 is a general processor, applications specific integrated circuit, digital signal processor, image processor, FPGA, CPU, analog circuit, digital circuit, combinations thereof or other now known or later developed device for processing ultrasound data.
- the processor 48 is operable to process ultrasound data output by the GPU 34 .
- the processor 48 and GPU 34 are provided as part of an ultrasound data path beginning at a beamformer and ending at the display 46 .
- the GPU 34 implements at least a part of one ultrasound process, such as receive beamformation, scan conversion, motion detection, other ultrasound process or combinations thereof.
- the processor 48 implements at least part of a same or different ultrasound process, such as detection, motion tracking, beamforming, filtering, scan conversion, other ultrasound process, or combinations thereof.
- the vertex processor 36 and the fragment processor 42 have independent instruction sets assembled by the shader language or other programming interface. Ultrasonic data sent to the GPU 34 is processed by the vertex processor and/or fragment processor 42 for implementing ultrasound imaging processes.
- since the GPU 34 may be less likely to be interrupted than a central processing unit or other processors, the GPU 34 may provide more consistent or reliable image processing. While the clock rate may be lower or higher, even lower clock rates may provide faster image processing given the parallel processing provided by the GPU 34 .
- the GPU 34 is capable of carrying out a large number of floating point parallel computations.
- FIG. 4 shows a method for diagnostic medical ultrasound imaging with a graphics processing unit.
- a GPU processes ultrasound data. For example, at least a part of an ultrasound process of receive beamformation, scan conversion, motion detection or other ultrasound processes are performed by the GPU.
- the programmable vertex processor 36 and/or programmable fragment processor 42 are used to perform the ultrasound process.
- GPUs have optimized architectures for vector and matrix data types. Vector and matrix processes are used for ultrasonic data processing, such as receive beamformation, scan conversion, motion tracking or correlation processes.
- receive beamformation is performed by combining data from a plurality of different channels, such as 128 channels over time.
- the GPU alters the relative position along a temporal dimension of the data across the channels with the vertex processor, and weights the data and combines the data associated with a particular location across the 128 or other number of channels with the fragment processor.
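- The per-channel alignment and weight-and-sum operation described above can be illustrated with a scalar sketch. The following is a hypothetical pure-Python stand-in for the two GPU stages (the function name, channel count and sample values are invented for illustration, not taken from the patent):

```python
# Hypothetical sketch of delay-and-sum receive beamformation: each channel's
# sample stream is shifted along time (the vertex-processor role described
# above), then weighted and summed across channels (the fragment-processor
# role). Channel data and delays are illustrative only.

def delay_and_sum(channels, delays, weights):
    """channels: list of per-channel sample lists; delays: integer sample
    shift per channel; weights: apodization weight per channel."""
    n = len(channels[0])
    out = [0.0] * n
    for ch, d, w in zip(channels, delays, weights):
        for i in range(n):
            j = i - d                  # shift this channel along time
            if 0 <= j < n:
                out[i] += w * ch[j]    # weight and sum across channels
    return out
```

With the second channel delayed by one sample, the echoes from both channels align at the same output index and sum coherently.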
- a fast Fourier transform and an inverse Fourier transform are used for receive beamformation.
- the vertex processor passes the ultrasound data to the fragment processor.
- the fragment processor identifies a fragment value, finds a neighbor, and combines the fragments with a weight.
- an iterative process is implemented to provide the fast Fourier transform.
- the data is then exported for further receive beamformation processing by the processor 48 .
- the GPU 34 performs the further process.
- the GPU 34 is then used to apply an inverse fast Fourier transform.
- the inverse transformed data represents beamformed data or a plurality of samples representing different locations within a patient.
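- The iterative find-a-neighbor-and-combine-with-a-weight process described above matches the butterfly structure of an iterative radix-2 FFT. The following is an illustrative sketch of that structure in plain Python, not the patent's GPU implementation (the function name is invented):

```python
import cmath

def fft_iterative(x):
    """Iterative radix-2 decimation-in-time FFT. len(x) must be a power of
    two. Each stage pairs a value with a 'neighbor' half a block away and
    combines the two with a complex twiddle weight, mirroring the fragment
    process described above."""
    n = len(x)
    x = [complex(v) for v in x]
    # Bit-reversal permutation orders the inputs for in-place butterflies.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # Iterative butterfly stages: log2(n) passes over the data.
    size = 2
    while size <= n:
        w_m = cmath.exp(-2j * cmath.pi / size)
        half = size // 2
        for start in range(0, n, size):
            w = 1 + 0j
            for k in range(start, start + half):
                t = w * x[k + half]        # neighbor scaled by twiddle weight
                x[k + half] = x[k] - t
                x[k] = x[k] + t
                w *= w_m
        size *= 2
    return x
```

On a GPU, each pass corresponds to one rendering pass over the data, with the per-fragment neighbor lookup and twiddle multiply done in parallel.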
- the Fourier and inverse Fourier transforms implemented by the GPU 34 are described in U.S. Pat. No. ______ (Application Ser. No. ______) (Attorney reference no. 2001P20912US)), the disclosure which is incorporated herein by reference.
- the vertex processor reformats data from a polar coordinate system into a Cartesian coordinate system.
- a scan conversion is implemented by assigning coordinates associated with current data as a function of the new format.
- a linear interpolation by the rasterization and interpolation processor completes the scan conversion.
- the fragment processor 42 implements the non-linear function.
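- The coordinate reassignment and linear interpolation described above can be sketched as a per-pixel lookup. The following is a simplified illustration (function name and grid values are hypothetical; it interpolates linearly along range only, whereas a real scan converter also interpolates in angle):

```python
import math

def scan_convert_point(polar, dr, dtheta, x, y):
    """Look up the sample for Cartesian point (x, y) in a polar acquisition
    grid polar[range_index][angle_index], with linear interpolation along
    range. dr and dtheta are the range and angle sample spacings. A scalar
    stand-in for the coordinate reassignment plus interpolator pass above."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    ri = r / dr                        # fractional range index
    ti = int(round(theta / dtheta))    # nearest angle index (no angle interp)
    i0 = int(ri)
    frac = ri - i0
    if not (0 <= ti < len(polar[0])) or i0 + 1 >= len(polar):
        return 0.0                     # outside the scanned sector
    return (1 - frac) * polar[i0][ti] + frac * polar[i0 + 1][ti]
```

A point midway between two range samples returns the average of their values, which is the linear interpolation the rasterization and interpolation stage provides for free.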
- motion tracking or motion detection using correlation or other processes is performed by the GPU 34 .
- Any of the vertex processor or fragment processor may be used for implementing correlation functions, such as cross correlation or minimum sum of absolute differences.
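- The minimum sum of absolute differences criterion mentioned above can be shown with a one-dimensional sketch. This is an illustrative stand-in (invented function name, 1-D signals instead of image blocks), not the patent's implementation:

```python
def best_offset_sad(ref, target, max_shift):
    """Minimum sum-of-absolute-differences search: slide the reference signal
    across the target and return the displacement with the smallest SAD,
    a 1-D stand-in for the block-matching motion detection described above."""
    best, best_d = float("inf"), 0
    for d in range(-max_shift, max_shift + 1):
        lo = max(0, -d)
        hi = min(len(ref), len(target) - d)
        if hi <= lo:
            continue
        sad = sum(abs(ref[i] - target[i + d]) for i in range(lo, hi))
        sad /= (hi - lo)               # normalize by overlap length
        if sad < best:
            best, best_d = sad, d
    return best_d
```

On a GPU, the absolute differences for all candidate shifts can be evaluated in parallel across fragments, with the minimum selected in a reduction pass.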
- Other ultrasound processes may be performed with the GPU 34 .
- any of the various outputs to the image processor 48 may be used.
- where the ultrasound data output from the GPU 34 is processed with a different processor, the additional processing is performed prior to generating a display responsive to the data input to the GPU 34 .
- the processor 48 implements filtering, detection, scan conversion, beamformation, motion detection or another ultrasound process.
- the data output by the processor 48 is provided to the display 46 either through or without passing through the GPU 34 .
- the GPU 34 processes data for a three dimensional representation using an ultrasound process.
- the data is then provided to the processor 48 and/or additional devices for further ultrasound processing, such as filtering.
- the ultrasound data is then provided back to the GPU 34 for graphics processing and output to the display 46 .
- the ultrasound data is output to the display.
- the display is responsive to the ultrasound data processed one or more times by the GPU 34 and another or different processor.
- the GPU has multiple programmable processors while being relatively cheap.
- the large parallel processing capability is less susceptible to interrupts than processors operating pursuant to an operating system.
- the GPU is programmed to perform an ultrasound process.
- the GPU implements only graphics processing or may implement the ultrasound processing as well as graphics processing.
- the GPU for implementing ultrasound processing is provided in a system different than described above for FIG. 1 .
- the system described for FIG. 1 uses the GPU for graphics processing or other volume rendering of ultrasound or non-ultrasound data without ultrasound processing by the GPU.
- the drivers or software may be adapted for use with reprogrammable processors or GPUs, such as provided by reprogramming an FPGA during use or by service personnel.
Abstract
Methods and systems provide simulation or medical diagnostic imaging with a graphics processing unit. Data to be processed by a graphics processing unit is transferred from a source to the graphics processing unit without copying by the central processing unit. For example, the central processing unit does not copy data to the cache. Instead, the source of data transfers the data directly to the graphics processing unit or directly to a graphics aperture region of a memory for transfer to the video memory of the GPU. The GPU is then used to generate a two-dimensional or three-dimensional image. The GPU is used to perform a medical imaging process, such as an ultrasound imaging process. The processed data is transferred to a different processor. Since the GPU provides various parallel processors, the GPU may more efficiently perform image processes different from rendering a two-dimensional or three-dimensional image.
Description
- The present invention relates to a graphics processing unit. Loading and processing data with the graphics processing unit (GPU) is controlled.
- GPUs are provided as videocards on personal computers. Using the AGP specification, a central processing unit (CPU) coordinates the transfer of data from a random access memory to the GPU for video rendering. A memory control hub is connected by various buses to each of a source, a RAM, the CPU and the GPU. For example, an AGP chip set is used as a memory control hub. The memory control hub controls the data transfers between any of the various interconnected devices. The data is obtained from the source, such as a CD, diskette, or hard drive. The data from the source is routed to a random access memory (RAM). The CPU then copies the data from the random access memory into the CPU cache memory. For use of the GPU, the CPU copies the data to a graphics aperture region of the RAM controlled pursuant to a graphics aperture resource table (GART). Prior to copying the data to the graphics aperture region, the CPU may also reformat the data. This is because the GPU expects the data to be in a particular format in order to deliver maximum throughput. The data from the graphics aperture region is then transferred through an accelerated graphics port (AGP) to the video memory of the GPU. The GPU then performs various rendering or video processing and outputs a resulting image to a display. Pursuant to an application programming interface (API), the CPU controls operation of the GPU.
- Since the CPU copies the data from the RAM to the graphics aperture region of the RAM, the data is copied multiple times. Any loading on the CPU for other processing may delay the transfer of data to the GPU. Since the CPU operations may be interrupted, the transfer of data to the GPU is inconsistent or non-deterministic.
- The AGP hardware discussed above may be used in a medical diagnostic ultrasound system, such as disclosed in U.S. Pat. No. 6,358,204, the disclosure of which is incorporated herein by reference. The memory control hub connects a CPU to a memory. The two other ports of the memory control hub are connected to two different buses, one a system bus and another an ultrasound data bus. The ultrasound data bus connects to a source of ultrasound data as well as outputs of ultrasound data, such as a scan converter connected with the display.
- By way of introduction, the preferred embodiments described below include methods and systems for simulation or medical diagnostic imaging with a graphics processing unit. In one embodiment, data to be processed by a graphics processing unit is transferred from a source to the graphics processing unit without copying by the central processing unit. For example, the central processing unit does not copy data to the cache. Instead, the source of data transfers the data directly to the graphics processing unit or directly to a graphics aperture region of a memory for transfer to the video memory of the GPU. The GPU is then used to generate a two-dimensional or three-dimensional image.
- In another embodiment, the GPU is used to perform a medical imaging process, such as an ultrasound imaging process. The processed data is transferred to a different processor. Since the GPU provides various parallel processors, the GPU may more efficiently perform data processing different from rendering a two-dimensional or three-dimensional image.
- In a first aspect, a graphics processing unit system is provided for diagnostic medical ultrasound imaging. A graphics processing unit has an input and at least one output. The graphics processing unit is operable to process ultrasound data from the input. A processor connects with the output of the graphics processing unit. The processor is operable to process ultrasound data output on the output of the graphics processing unit.
- In a second aspect, a method for diagnostic medical ultrasound imaging with a graphics processing unit is provided. Ultrasound data is processed with the graphics processing unit. Ultrasound data output from the graphics processing unit is then further processed with a different processor prior to generating a display responsive to the ultrasound data. A display responsive to the ultrasound data is then generated.
- In a third aspect, an improvement in a method for loading a video memory of a graphics processing unit is provided. A central processing unit interacts with a memory, such as a RAM memory, and the graphics processing unit. In the improvement, data is loaded into the video memory without storing the data in a cache of the central processing unit.
- In a fourth aspect, a system for loading a video memory of a graphics processing unit is provided. A central processing unit connects with the graphics processing unit. The central processing unit is operable to run an application programming interface of the graphics processing unit. A source of data connects with the graphics processing unit. The data is transferable from the source to the video memory without copying of the data by the central processing unit.
- The present invention is defined by the claims, and nothing in this section should be taken as a limitation on those claims. Further aspects and advantages of the invention are discussed below in conjunction with the preferred embodiments.
- The components and the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
- FIG. 1 is a block diagram of one embodiment of a system for loading a video memory of a graphics processing unit;
- FIG. 2 is a flow chart diagram of one embodiment of a method for loading a video memory of a graphics processing unit;
- FIG. 3 is a block diagram of one embodiment of a graphics processing unit and interconnected processor; and
- FIG. 4 is a flow chart diagram of one embodiment of a method for processing diagnostic medical ultrasound data with a graphics processing unit.
- In one aspect, the routing of data for loading into a video memory of a GPU is controlled. In another aspect, the GPU performs image processing different than two- or three-dimensional rendering of an image. In another aspect, the GPU performs general mathematical computations. In yet another aspect, the GPU performs two- or three-dimensional renderings of an image. In another aspect, a combination of two or more of the aspects discussed above is provided. The different aspects may be used independently or separately in other embodiments. Immediately below, embodiments directed to loading data into the video memory are provided. Subsequently, embodiments directed to performing different processes with the GPU are provided.
- FIG. 1 shows one embodiment of a system 10 for loading a video memory 12 of a GPU 14. A memory control hub 16 interconnects the GPU 14 with the CPU 18, a memory 20 and a source of data 22. Additional, different or fewer components may be provided. For example, the GPU 14 connects to the source 22 without one or more of the memory control hub 16, the CPU 18 and the memory 20. As another example, an additional component connects to the memory control hub 16. The system 10 is a system configured pursuant to the AGP specification, but may be configured pursuant to different specifications, such as PCI, PCI-X, PCI Express, or arrangements with or without any of the various components. In one embodiment, the system 10 is a personal computer for generating graphical images, such as simulations. The system 10 may also be used as a work station for generating graphical images from data representing an object, such as a scanned picture. In yet another embodiment, the system 10 is a medical imaging system, such as an X-ray, MRI, computer tomography, diagnostic ultrasound or other now known or later developed medical imaging system.
- The GPU 14 is a processor, circuit, application specific integrated circuit, digital signal processor, video card, combinations thereof or other now known or later developed device for graphics processing. In one embodiment, the GPU 14 is a graphics processor or video card provided by nVIDIA, ATI or Matrox. These or other devices using an API of OpenGL, DirectX or other now known or later developed APIs may be used. In one embodiment, the GPU 14 includes one or more vertex processors, such as 16 vertex processors, and one or more fragment processors, such as 64 fragment processing units. Other analog or digital devices may also be included, such as rasterization and interpolation circuits. One or more frame buffers may be provided for outputting data to a display. The GPU 14 receives data in one or more formats and generates two- or three-dimensional images based on the data, such as by performing texture mapping or other two- or three-dimensional rendering. For example, the data received represents various objects with associated spatial relationships. The GPU 14 is operable to determine the relative positioning of the data and generate fragments representing data visible from a particular viewing direction. The GPU 14 is operable to decompress data, so that the bandwidth of data transferred to the GPU 14 is maximized through compression. Alternatively, uncompressed data is transferred to the GPU 14.
- The GPU 14 includes the video memory 12. In one embodiment, the video memory 12 is a random access memory, but other now known or later developed memories may be used. The video memory 12 stores any of various amounts of information, such as 64, 128, 256 or other number of kilobytes. The GPU 14 accesses information from the video memory 12 for graphics processing. Graphics processing is performed pursuant to the API run by the CPU 18.
- The CPU 18 is a general processor, application specific integrated circuit, dedicated processor, digital signal processor, digital circuit, analog circuit, combinations thereof or other now known or later developed processing device. In one embodiment, the central processing unit 18 is a processor operable to control a system pursuant to the AGP specification. In alternative embodiments, processors operating pursuant to the same or different specifications may be provided. In one embodiment, the CPU 18 is configured in a parallel processing arrangement, such as including two or more processors for controlling or processing data. Any of various now known or later developed processors may be used. The CPU 18 connects with the GPU 14 for running an application programming interface of the GPU 14. The CPU 18 provides instructions pursuant to the API for controlling the graphics rendering. The CPU 18 implements a driver for the GPU 14 operable to accept pre-formatted data without processing by the CPU 18. The CPU 18 also controls the memory control hub 16 and associated memory 20. In one embodiment, the CPU 18 controls or processes data from the source 22. Alternatively, the source 22 operates independently of the CPU 18.
- The memory 20 is a random access memory, such as arranged in one, two or more different chips or chip sets. Other now known or later developed memories may be used. The memory 20 is connected with the CPU 18, such as through the memory control hub 16 for allowing the CPU 18 access to the memory 20. The memory 20 is controlled by the CPU 18. In one embodiment, the memory 20 has a common address scheme accessible by the memory control hub 16 or the CPU 18. A section or group of addresses of the memory 20 is assigned as a graphics aperture region. The addresses associated with the graphics aperture region identify addresses for data to be transferred to the video memory 12. The graphics aperture region is generally not accessible for uses other than transfer of data to the GPU 14. In one embodiment, the size of the graphics aperture region matches the size of the video memory 12. In alternative embodiments, the graphics aperture region is accessible for other uses or may be a different size than the video memory 12. The GPU 14, the memory control hub 16 or the CPU 18 causes data stored in the graphics aperture region to be transferred or copied to the video memory 12. In one embodiment, the graphics aperture region and common address scheme are configured as part of a contiguous memory controlled by a Graphics Address Re-mapping Table (GART) for controlling access to the memory 20.
- In one embodiment, the graphics aperture region is operable to slide or change memory addresses in an address loop. The addresses of the graphics aperture region slide within the memory 20 such that the start and end memory locations of the graphics aperture region can be incremented or decremented within the region. When the upper or lower end of the graphics aperture region is reached, the address is shifted to the opposing lower or upper end respectively in a circular fashion. As a result, a memory loop of graphics data is provided within the graphics aperture region. In alternative embodiments, the memory 20 is divided up into separately accessible sections or includes multiple devices that are separately controlled and accessed.
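- The circular address behavior described above can be modeled as a ring buffer. The following is a toy illustration (class name, sizes and data are invented; real aperture addressing is done in hardware via the GART, not in Python):

```python
class SlidingAperture:
    """Toy model of the sliding graphics aperture region: writes advance
    through a fixed pool of addresses and wrap from the upper end back to
    the lower end, so newly acquired frames overwrite the oldest data in a
    circular (CINE-loop-like) fashion."""

    def __init__(self, region_size, frame_size):
        self.region = [None] * region_size
        self.frame_size = frame_size
        self.head = 0                  # next write address within the region

    def write_frame(self, frame):
        assert len(frame) == self.frame_size
        for sample in frame:
            self.region[self.head] = sample
            # Wrap to the opposing end when the upper address is reached.
            self.head = (self.head + 1) % len(self.region)
```

Writing two three-sample frames into a four-address region shows the second frame wrapping around and overwriting the start of the first.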
- The source 22 is a database, memory, sensor, CD, disk drive, hard drive, tape, tape reader, modem, computer network, or other now known or later developed source of graphics data. In one embodiment, the source 22 is a program or software operated by the CPU 18 to generate a graphics simulation. In another embodiment, the source of data 22 is a medical sensor, such as an x-ray, MRI, computer tomography or medical diagnostic ultrasound scanner. Medical diagnostic imaging data is provided by the source 22. For example, the source 22 is an ultrasound beamformer operable to receive acoustic echoes representing a patient. Ultrasound data includes in-phase and quadrature data from a beamformer and Spectral Doppler data. The ultrasound beamformer generates one or more data samples representing different spatial locations along a plurality of beams of a scan of a patient. Frames of data representing each two- or three-dimensional region are then output. The ultrasound data is detected using any of various detectors, such as B-mode, Doppler, harmonic, contrast agent or other now known or later developed detectors. In one embodiment, the ultrasound beamformer provides the data prior to beamforming, ultrasound data prior to detection, or ultrasound data after detection. The source of data 22 is connected with the GPU 14 either directly or through one or more devices as shown in FIG. 1.
- In one embodiment, the source 22 includes a processor, a buffer, or formatter for configuring the data. For example, a buffer and processor are used with an ultrasound beamformer for GPU-specific formatting of texture data acquired in a polar coordinate format into three-dimensional texture rendering by the GPU 14. In one embodiment, the GPU 14 uses a format for three-dimensional texturing to optimize memory access speeds. The data is arranged in an order to provide a GPU-specific format for data transfer. In alternative embodiments, the GPU 14 includes a buffer, processor or formatter for GPU-specific formatting of the data. For 3D texture or other data, different formats may be used for the data provided from the source 22 or for the data used by the GPU 14.
- In one embodiment, the source 22 is operable to provide data representing a three-dimensional volume. For example, an ultrasound medical sensor and associated beamformer are operable to scan a patient in a three-dimensional or volume region. The scan is performed using a mechanically-moved or multi-dimensional transducer array to scan a volume by firing a plurality of ultrasound lines. Ultrasound data is then provided in sets of data representing a three-dimensional volume. More than one set may represent the same volume at a same time, such as providing a Doppler set and a B-mode set of data. For four-dimensional imaging, a plurality of sets representing the volume at different times is provided. As another example, two sets of data, processed differently, are used. Processing includes filtering. Spatial, frequency or other filtering may be provided for processing the data. One processed set of data is used for three-dimensional volume rendering. The other processed set of data is used for generating two-dimensional representations or slices of the volume. In one embodiment using a graphics aperture region for four-dimensional volume rendering, one processed set of data is overwritten as each set is acquired to conserve memory space. The other processed set of data is maintained throughout a time period for later processing, three-dimensional rendering or two-dimensional imaging. Alternatively, both processed data sets are stored representing volumes at multiple times. In one embodiment, sets of data are maintained until the extent of the graphics aperture region has been used. The addresses are then looped back to data representing the earliest set, and the more recently acquired data is overwritten in a CINE loop fashion.
- The memory control hub 16 is a processor, a bus, an application specific integrated circuit, an AGP chip set, an AGP controller, combinations thereof or other now known or later developed device for interfacing between two or more of the GPU 14, CPU 18, memory 20 and the source 22. In one embodiment, a single device is provided for the interface, such as a single circuit or chip, but multiple devices may be provided in any of various possible architectures for transferring data between any two or more of the devices connected with the memory control hub 16. In yet other alternative embodiments, the various devices directly connect through a single data bus or with each other without the memory control hub 16. The memory control hub 16 connects with the GPU 14 with an accelerated graphics bus, connects with the CPU 18 with a host or front side bus, connects with the memory 20 with a memory bus, and connects with the source 22 with a PCI-X or PCI acquisition bus. Different buses or signal lines may be used for any of the various connections discussed above, including now known or later developed connections.
- Rather than routing the data from the source 22 to the memory 20, then through the CPU 18 to the graphics aperture region of the memory 20, and finally from the graphics aperture region to the video memory 12, the data is routed from the source 22 to the graphics aperture region or to the video memory 12 without copying or loading of the data by the CPU 18. The data from the source 22 is routed using a driver, software or other control implemented by the memory control hub 16, the CPU 18, the GPU 14 or another device. The data from the source 22 is operable to route to the video memory 12 through the memory control hub 16 without passing to the CPU 18. For example, the data for processing by the GPU 14 is not stored in the cache memory of the CPU 18.
- In one embodiment, the data from the source 22 is operable to be routed to the video memory 12 from the source 22 through the graphics aperture region of the memory 20 without passing to the CPU 18. The data is written to the memory 20 directly into the graphics aperture region for transfer or copying to the video memory 12 by the memory control hub 16. In another embodiment, the data is operable to be routed to the video memory 12 from the source 22 without passing to the CPU 18 or the associated memory 20. The memory 20, including the graphics aperture region, is avoided by directly routing the data from the source 22 to the video memory 12.
- FIG. 2 shows a method for loading a video memory of a graphics processing unit using the system 10 of FIG. 1 or another system. The CPU interacts with a memory separate from the GPU. An improvement is provided by loading data into the video memory without storing the data in a cache of the CPU in act 24. As represented by the disconnect between acts 24 and 26, the CPU does not load the data from the source for the GPU into the cache of the CPU. The data is provided to the GPU 14 without copying by the CPU, such as without copying from one location of the memory 20 to a graphics aperture region of the memory.
- In response to user selections or otherwise configuring the system, the CPU begins an application program using the GPU. For example, a user selects three-dimensional or four-dimensional imaging. The CPU then instructs the GPU to become the bus-master and download data from the graphics aperture region or the source. The video memory is loaded without processing the data by the CPU during the transfer. The data is transferred without GPU-specific formatting (e.g., swizzling) or copying by the CPU. The CPU performs control aspects of the data transfer by signaling the GPU and/or other devices.
- In act 28, the data is transferred to a graphics aperture region of the memory associated with the CPU, such as a RAM. For example, ultrasound data is written from a beamformer or other medical sensor into the graphics aperture region of the memory of FIG. 1. The source writes the data directly into the graphics aperture region. In one embodiment, the data is formatted for use by the GPU and output from the source. As a result, the GPU-specific formatting by the CPU for three-dimensional texture data is avoided. The source performs any GPU-specific formatting. Alternatively, the data is provided to the graphics aperture region without a particular format for the GPU. In this example, the GPU-specific formatting is performed by the GPU after transfer to the GPU.
- The data written into the graphics aperture region is transferred to the video memory without processing or copying of the data by the CPU. For example, the GPU acquires control of the bus or a portion of the bus connected with the memory having the graphics aperture region (i.e., the GPU bus-masters). The GPU then downloads the data from the graphics aperture region into the video memory. Alternatively, the CPU, the memory control hub, the source or another device controls one or more buses to cause the transfer of the data to the graphics aperture region. While the CPU is operable to run an application programming interface for controlling the GPU, the CPU operates free of copying data between different locations of the memory for transfer to the video memory.
- In one embodiment, the source writes the data to the graphics aperture region where the graphics aperture region slides by using an address loop as discussed above. All or a portion of the graphics aperture region uses the looping address structure to allow one type of data or all of the data to be configured in a loop fashion for representing a volume or area at different times.
- As an alternative to act 28, the source writes the data to the video memory in act 30, such as transferring the data to the video memory without storing the data in a graphics aperture region. Based on control signals from the CPU or other device, the GPU controls the transfer or otherwise acquires the data from the source. Alternatively, the source, the memory control hub, or the CPU controls one or more buses for causing the data to be written to the video memory.
- In alternative embodiments, some data output by the source is directed to the video memory without transfer to the graphics aperture region while other data is transferred to the video memory through the graphics aperture region. In yet other alternative embodiments, a subset of data may be copied by the CPU, stored in the cache of the CPU or otherwise processed by the CPU as part of the transfer to the video memory. CINE-buffering or otherwise providing storage of different representations of the same volume at different times is provided in the video memory or as a function of the timing of the output of data from the source.
- In either of the embodiments of acts 28 and 30, transferring data to the GPU without copying by the CPU may increase the transfer speed between the source and the GPU. Increased transfer speed may allow for increased volume rendering rates for three-dimensional or four-dimensional imaging. Any interrupts or other processing performed by the CPU may not delay the transfer of data to the GPU. Windows or other operating system latencies may have no or minimal effect on the volume rendering by the GPU. Increased volume rendering rates due to increased data transfer rates may allow for four-dimensional cardiology volume rendering. Overlapping pipeline sequences for transferring data or other operations to increase parallel transfers of data may also increase transfer rates.
- FIG. 3 shows a graphics processing unit system 32 for diagnostic medical ultrasound imaging. The system 32 includes a GPU 14 in the configuration discussed above for FIG. 1, in a configuration disclosed in any one of U.S. Pat. Nos. ______ and ______ (application Ser. Nos. 10/644,363 and 10/388,128), the disclosures of which are incorporated herein by reference, or other GPUs provided anywhere in medical diagnostic ultrasound systems (e.g., a system with an ultrasound transducer or a workstation for processing ultrasound data).
- The GPU 34 includes a programmable vertex processor 36, a primitive assembly processor 38, a rasterization and interpolation processor 40, a programmable fragment processor 42 and a frame buffer 44. Additional, different or fewer components may be provided. Any of the processors of the GPU 34 are general processors, digital circuits, analog circuits, application specific integrated circuits, digital processors, graphics accelerator cards, display cards or other devices now known or later developed. In one embodiment, the GPU 34 is implemented as a series of discrete devices on a mother board or as a daughter board, but may be implemented as a single chip, a circuit on a card or other layout. The programmable vertex processor 36 is a group of 16 parallel processing units in one embodiment, but a fewer or greater number of processors may be provided. The fragment processor 42 is a parallel arrangement of 64 processing units in one embodiment, but more or fewer processing units may be provided.
FIG. 3 shows the graphics processing pipeline standardized by APIs such as OpenGL and DirectX. The GPU 34 includes a programmable vertex processor 36, a primitive assembly 38, a rasterization and interpolation block 40, a programmable fragment processor 42 and a frame buffer 44. The input to the vertex processor 36 is a set of vertices in two- or three-dimensional space. Each vertex has a set of attributes, such as coordinates, color and texture coordinates. The vertex processor 36 transforms the coordinates of the vertices into a frame of reference. The output of the vertex processor 36 is a set of vertices with new attributes changed by the vertex processor 36. These vertices are fed into the next stage, the primitive assembly 38. Here, the vertices are grouped together to form points, lines and triangles. These primitives are then fed into the rasterization and interpolation stage 40. This stage rasterizes each primitive, such as points, lines and triangles, into a set of fragments. A fragment is a pixel with an associated depth and is located on a primitive. The fragments have attributes, such as color, coordinates and texture coordinates. The next stage, the programmable fragment processor 42, takes in these fragments, applies various processes to them, and creates pixels. The pixels have attributes, such as color, and are written into the final stage, the frame buffer 44. Other now known or later developed structures and processes may be used in the graphics pipeline for graphics rendering. The blocks shown in FIG. 3 are high-level blocks. Each block contains many finer processing stages. For example, the rasterization and interpolation stage 40 can contain operations such as the Scissor Test, Alpha Test, Stencil Test and Depth Test. The frame buffer 44 is a memory, buffer or other device for receiving the pixels from the fragment processor 42 for display on the display 46. - The GPU 34 is operable to receive graphics data and generate a display on the
display 46 from the graphics data. The process is performed pursuant to an application programming interface, such as GDI, GDI+, DirectX, OpenGL, or other APIs now known or later developed. Additionally or alternatively, the GPU 34 is used to process ultrasound data for purposes other than immediate display. For example, in-phase and quadrature data, post-detection data, log-compressed data, scan-converted data or any other ultrasound data is input to the GPU 34. Using the programmable vertex processor 36 and/or the fragment processor 42, the ultrasound data is processed. OpenGL, DirectX extensions or other programming languages, such as the Cg shader language, program the GPU 34 to process ultrasound data. HLSL, Stanford's high-level shader language or other now known or later developed shader languages may also be used. Some resource-intensive computations are performed by the GPU 34 rather than another processor, such as a CPU, DSP, ASIC or FPGA. Since the GPU 34 functions as a computational engine, one or more additional outputs are provided. For example, an output is provided downstream of the programmable vertex processor 36 but upstream of the fragment processor 42. As an alternative or additional output, an output is provided after the fragment processor 42. Alternatively, the output from the frame buffer 44 is used. - Either or both of the
vertex processor 36 and fragment processor 42 are programmed to perform ultrasound data processing. For example, the vertex processor 36 is programmed or operable to perform scan conversion operations. Using the vector or matrix type polar coordinate data, the vertex processor reformats each spatial location into a format appropriate for a display. As another example, the fragment processor 42 is operable to perform Fourier transforms or non-linear scan conversion operations. Scan-converted ultrasound data output by the vertex processor 36 is provided to the programmable fragment processor 42 for non-linear operations through interpolation or other fragment processes. - In one embodiment, the GPU 34 is operable to process ultrasound data and provide the processed data to a
different image processor 48. The image processor 48 provides data for the display 46 or routes data back to the GPU 34 for rendering to the display 46. - The
processor 48 is a general processor, application specific integrated circuit, digital signal processor, image processor, FPGA, CPU, analog circuit, digital circuit, combinations thereof or other now known or later developed device for processing ultrasound data. The processor 48 is operable to process ultrasound data output by the GPU 34. For example, the processor 48 and GPU 34 are provided as part of an ultrasound data path beginning at a beamformer and ending at the display 46. The GPU 34 implements at least a part of one ultrasound process, such as receive beamformation, scan conversion, motion detection, another ultrasound process or combinations thereof. The processor 48 implements at least part of a same or different ultrasound process, such as detection, motion tracking, beamforming, filtering, scan conversion, another ultrasound process, or combinations thereof. The vertex processor 36 and the fragment processor 42 have independent instruction sets assembled by the shader language or other programming interface. Ultrasound data sent to the GPU 34 is processed by the vertex processor 36 and/or fragment processor 42 for implementing ultrasound imaging processes. - Since the GPU 34 may be less likely to be interrupted than a central processing unit or other processors, the GPU 34 may provide more consistent or reliable image processing. Although the clock rate of the GPU 34 may be lower or higher than that of other processors, even lower clock rates may provide faster image processing given the parallel processing provided by the GPU 34. The GPU 34 is capable of carrying out a large number of floating point parallel computations.
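The FIG. 3 pipeline described above can be modeled in a few lines of Python. This is a toy software sketch, not the hardware: the 8x8 render target, the depth-fade shading and the point-only primitives are invented for illustration, and a real pipeline rasterizes lines and triangles and runs these stages in parallel:

```python
import numpy as np

WIDTH, HEIGHT = 8, 8   # toy render target

def vertex_stage(vertices, transform):
    # Vertex processor: transform each vertex's coordinates into a new
    # frame of reference; other attributes (here, color) pass through.
    return [(transform @ np.append(pos, 1.0), color) for pos, color in vertices]

def rasterize(points):
    # Rasterization: map each point primitive from normalized device
    # coordinates [-1, 1] to a fragment: (x, y, depth, color).
    frags = []
    for p, color in points:
        ndc = p[:3] / p[3]
        x = int((ndc[0] + 1) / 2 * (WIDTH - 1))
        y = int((ndc[1] + 1) / 2 * (HEIGHT - 1))
        frags.append((x, y, ndc[2], color))
    return frags

def fragment_stage(frags):
    # Fragment processor: per-fragment shading (here, a simple depth fade).
    return [(x, y, z, tuple(c * (1.0 - z) for c in color))
            for x, y, z, color in frags]

def frame_buffer(pixels):
    # Frame buffer: keep the nearest fragment per pixel (a depth test).
    fb = {}
    for x, y, z, color in pixels:
        if (x, y) not in fb or z < fb[(x, y)][0]:
            fb[(x, y)] = (z, color)
    return fb

vertices = [(np.array([0.0, 0.0, 0.2]), (1.0, 0.0, 0.0)),
            (np.array([0.5, -0.5, 0.8]), (0.0, 1.0, 0.0))]
fb = frame_buffer(fragment_stage(rasterize(vertex_stage(vertices, np.eye(4)))))
```

Each function corresponds to one block of FIG. 3; chaining them mirrors the fixed stage order the APIs standardize, with the programmable stages (vertex and fragment) being the ones a shader language would replace.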
-
FIG. 4 shows a method for diagnostic medical ultrasound imaging with a graphics processing unit. In act 56, a GPU processes ultrasound data. For example, at least a part of an ultrasound process of receive beamformation, scan conversion, motion detection or other ultrasound processes is performed by the GPU. In one embodiment, the programmable vertex processor 36 and/or programmable fragment processor 42 are used to perform the ultrasound process. GPUs have optimized architectures for vector and matrix data types. Vector and matrix processes are used for ultrasound data processing, such as receive beamformation, scan conversion, motion tracking or correlation processes. - For example, receive beamformation is performed by combining data from a plurality of different channels, such as 128 channels, over time. The GPU alters the relative position along a temporal dimension of the data across the channels with the vertex processor, and weights the data and combines the data associated with a particular location across the 128 or other number of channels with the fragment processor. As another example for receive beamformation, a fast Fourier transform and an inverse Fourier transform are used. The vertex processor passes the ultrasound data to the fragment processor. The fragment processor identifies a fragment value, finds a neighbor, and combines the fragments with a weight. Using feedback to the input of the
programmable fragment processor 42 or the input of the GPU, an iterative process is implemented to provide the fast Fourier transform. The data is then exported for further receive beamformation processing by the processor 48. Alternatively, the GPU 34 performs the further processing. The GPU 34 is then used to apply an inverse fast Fourier transform. The inverse-transformed data represents beamformed data or a plurality of samples representing different locations within a patient. The Fourier and inverse Fourier transforms implemented by the GPU 34 are described in U.S. Pat. No. ______ (Application Ser. No. ______) (Attorney reference no. 2001P20912US), the disclosure of which is incorporated herein by reference. - As another example, the vertex processor reformats data from a polar coordinate system to a Cartesian coordinate system. For example, a scan conversion is implemented by assigning coordinates associated with current data as a function of the new format. A linear interpolation by the rasterization and interpolation processor completes the scan conversion. For non-linear scan conversion processes, the
fragment processor 42 implements the non-linear function. - As another example, motion tracking or motion detection using correlation or other processes is performed by the GPU 34. Either the vertex processor or the fragment processor may be used for implementing correlation functions, such as cross correlation or minimum sum of absolute differences. Other ultrasound processes may be performed with the GPU 34. Depending on the component of the GPU 34 implementing the process, any of the various outputs to the
image processor 48 may be used. - In
act 58, the ultrasound data output from the GPU 34 is processed with a different processor. The additional processing is performed prior to generating a display responsive to the data input to the GPU 34. For example, the processor 48 implements filtering, detection, scan conversion, beamformation, motion detection or another ultrasound process. The data output by the processor 48 is provided to the display 46 either through or without passing through the GPU 34. For example, the GPU 34 processes data for a three-dimensional representation using an ultrasound process. The data is then provided to the processor 48 and/or additional devices for further ultrasound processing, such as filtering. The ultrasound data is then provided back to the GPU 34 for graphics processing and output to the display 46. - In
act 60, the ultrasound data is output to the display. The display is responsive to the ultrasound data processed one or more times by the GPU 34 and another or different processor. The GPU has multiple programmable processors while being relatively inexpensive. Its large parallel processing capability is less susceptible to interrupts than processors operating pursuant to an operating system. Using high-level languages, the GPU is programmed to perform an ultrasound process. - While the invention has been described above by reference to various embodiments, it should be understood that any changes and modifications can be made without departing from the scope of the invention. For example, the GPU implements only graphics processing or may implement the ultrasound processing as well as graphics processing. As another example, the GPU for implementing ultrasound processing is provided in a system different from that described above for
FIG. 1. Similarly, the system described for FIG. 1 uses the GPU for graphics processing or other volume rendering of ultrasound or non-ultrasound data without ultrasound processing by the GPU. The drivers or software may be adapted for use with reprogrammable processors or GPUs, such as provided by reprogramming an FPGA during use or by service personnel. - It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and the scope of this invention.
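The delay-and-sum receive beamformation described above for act 56 — realigning each channel's data along the temporal dimension, then weighting and combining across channels — can be sketched as follows. This is an illustrative model only: the channel count, integer sample delays and uniform apodization weights are invented, and a real beamformer would use many more channels, fractional delays and dynamic focusing:

```python
import numpy as np

def delay_and_sum(channel_data, delays, weights):
    # Toy receive beamformer for one beam: realign each channel along the
    # temporal dimension (the role the text gives the vertex processor),
    # then weight and sum across channels (the fragment-processor role).
    n_ch, n_samp = channel_data.shape
    aligned = np.zeros_like(channel_data)
    for ch in range(n_ch):
        d = int(delays[ch])                        # integer sample shift
        aligned[ch, d:] = channel_data[ch, :n_samp - d]
    return (weights[:, None] * aligned).sum(axis=0)

# A point reflector's echo arrives one sample later on each successive
# channel; matching delays realign the echoes so they sum coherently.
n_ch, n_samp = 4, 16
data = np.zeros((n_ch, n_samp))
for ch in range(n_ch):
    data[ch, 5 + ch] = 1.0
delays = (n_ch - 1) - np.arange(n_ch)              # [3, 2, 1, 0]
beam = delay_and_sum(data, delays, np.ones(n_ch))
```

After the shifts, every channel's echo lands on the same sample index, so the weighted sum peaks at 4.0 there and is zero elsewhere: the coherent gain that beamformation provides.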
Claims (34)
1. (canceled)
2. (canceled)
3. (canceled)
4. (canceled)
5. (canceled)
6. (canceled)
7. (canceled)
8. (canceled)
9. (canceled)
10. (canceled)
11. (canceled)
12. (canceled)
13. (canceled)
14. (canceled)
15. (canceled)
16. In a method for loading a video memory of a graphics processing unit where a central processing unit interacts with a second memory and the graphics processing unit, an improvement comprising:
(a) loading data pre-formatted into a format expected by the graphics processing unit into the video memory without storing the data in a cache of the central processing unit.
17. The method of claim 16 wherein (a) comprises:
(a1) transferring the data to a graphics aperture region of the second memory from a source of data; and
(a2) transferring the data from the graphics aperture region to the video memory without processing of the data by the central processing unit.
18. The method of claim 17 wherein (a1) comprises writing ultrasound data from a beamformer into the graphics aperture region.
19. The method of claim 17 further comprising:
(b) sliding the graphics aperture region of the second memory in an address loop.
20. The method of claim 16 wherein (a) comprises transferring the data to the video memory without storing the data in a graphics aperture region.
21. The method of claim 16 wherein the central processing unit is operable to run an application programming interface for the graphics processing unit and operable to operate free of copying data between different locations of the second memory for transfer to the video memory, the second memory being a random access memory accessible to the central processing unit through a hub, the video memory connectable to the second memory through an accelerated graphics port of the hub.
22. The method of claim 16 wherein the data is formatted for the graphics processing unit without processing by the central processing unit.
23. The method of claim 16 further comprising:
(b) compressing the data prior to (a); and
(c) decompressing the data after (a) with the graphics processing unit.
24. A system for loading a video memory of a graphics processing unit, the system comprising:
a central processing unit connected with the graphics processing unit, the central processing unit operable to run an application programming interface of the graphics processing unit;
a source of data connected with the graphics processing unit;
a first memory connected with the graphics processing unit and the central processing unit; and
a memory control hub connected with the central processing unit, a video memory of the graphics processing unit, the source and the first memory;
wherein data is transferable from the source to the video memory without copying of the data by the central processing unit.
25. The system of claim 24 wherein the memory control hub is operable to route the data from the source to the video memory through the memory control hub without passing to the central processing unit.
26. The system of claim 24 wherein the first memory has a graphics aperture region connected with the central processing unit, the data from the source operable to route to the video memory from the source through the graphics aperture region without passing to the central processing unit.
27. The system of claim 26 wherein the graphics aperture region is operable to slide in an address loop.
28. The system of claim 26 wherein the memory control hub connects with the graphics processing unit with an accelerated graphics bus, connects with the central processing unit with a host bus, and connects with the first memory with a memory bus.
29. The system of claim 24 wherein the first memory connects with the central processing unit, the data from the source operable to route to the video memory from the source without passing to the central processing unit and without passing to the first memory, the first memory being a random access memory of the central processing unit.
30. The system of claim 24 wherein the central processing unit includes a cache memory, the data transferring to the video memory without storing the data in the cache memory.
31. The system of claim 24 wherein the source of data comprises a medical sensor, the data being medical diagnostic imaging data.
32. The system of claim 24 wherein the source of data comprises an ultrasound beamformer, the data being ultrasound data.
33. The system of claim 24 wherein the source of data is operable to format the data for the graphics processing unit without processing by the central processing unit.
34. The system of claim 24 wherein the data comprises compressed data and the graphics processing unit is operable to decompress the data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/060,046 US20050140682A1 (en) | 2003-12-05 | 2005-02-16 | Graphics processing unit for simulation or medical diagnostic imaging |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/728,666 US7119810B2 (en) | 2003-12-05 | 2003-12-05 | Graphics processing unit for simulation or medical diagnostic imaging |
US11/060,046 US20050140682A1 (en) | 2003-12-05 | 2005-02-16 | Graphics processing unit for simulation or medical diagnostic imaging |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/728,666 Division US7119810B2 (en) | 2003-12-05 | 2003-12-05 | Graphics processing unit for simulation or medical diagnostic imaging |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050140682A1 true US20050140682A1 (en) | 2005-06-30 |
Family
ID=34633763
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/728,666 Expired - Lifetime US7119810B2 (en) | 2003-12-05 | 2003-12-05 | Graphics processing unit for simulation or medical diagnostic imaging |
US11/060,046 Abandoned US20050140682A1 (en) | 2003-12-05 | 2005-02-16 | Graphics processing unit for simulation or medical diagnostic imaging |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/728,666 Expired - Lifetime US7119810B2 (en) | 2003-12-05 | 2003-12-05 | Graphics processing unit for simulation or medical diagnostic imaging |
Country Status (1)
Country | Link |
---|---|
US (2) | US7119810B2 (en) |
Cited By (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070294458A1 (en) * | 2006-06-15 | 2007-12-20 | Radoslav Danilak | Bus interface controller for cost-effective high performance graphics system with two or more graphics processing units |
US20070291039A1 (en) * | 2006-06-15 | 2007-12-20 | Radoslav Danilak | Graphics processing unit for cost effective high performance graphics system with two or more graphics processing units |
US20070294454A1 (en) * | 2006-06-15 | 2007-12-20 | Radoslav Danilak | Motherboard for cost-effective high performance graphics system with two or more graphics processing units |
US20080018642A1 (en) * | 2004-05-17 | 2008-01-24 | Stefan Brabec | Volume rendering processing distribution in a graphics processing unit |
US20090248941A1 (en) * | 2008-03-31 | 2009-10-01 | Advanced Micro Devices, Inc. | Peer-To-Peer Special Purpose Processor Architecture and Method |
US7616202B1 (en) | 2005-08-12 | 2009-11-10 | Nvidia Corporation | Compaction of z-only samples |
US20100088453A1 (en) * | 2008-10-03 | 2010-04-08 | Ati Technologies Ulc | Multi-Processor Architecture and Method |
US20100088452A1 (en) * | 2008-10-03 | 2010-04-08 | Advanced Micro Devices, Inc. | Internal BUS Bridge Architecture and Method in Multi-Processor Systems |
US7886094B1 (en) | 2005-06-15 | 2011-02-08 | Nvidia Corporation | Method and system for handshaking configuration between core logic components and graphics processors |
US20110074792A1 (en) * | 2009-09-30 | 2011-03-31 | Pai-Chi Li | Ultrasonic image processing system and ultrasonic image processing method thereof |
WO2014099763A1 (en) * | 2012-12-21 | 2014-06-26 | Jason Spencer | System and method for graphical processing of medical data |
US9002125B2 (en) | 2012-10-15 | 2015-04-07 | Nvidia Corporation | Z-plane compression with z-plane predictors |
CN104731569A (en) * | 2013-12-23 | 2015-06-24 | 华为技术有限公司 | Data processing method and relevant equipment |
US9092170B1 (en) * | 2005-10-18 | 2015-07-28 | Nvidia Corporation | Method and system for implementing fragment operation processing across a graphics bus interconnect |
US9105250B2 (en) | 2012-08-03 | 2015-08-11 | Nvidia Corporation | Coverage compaction |
CN105094981A (en) * | 2014-05-23 | 2015-11-25 | 华为技术有限公司 | Method and device for processing data |
US9286673B2 (en) | 2012-10-05 | 2016-03-15 | Volcano Corporation | Systems for correcting distortions in a medical image and methods of use thereof |
US9292918B2 (en) | 2012-10-05 | 2016-03-22 | Volcano Corporation | Methods and systems for transforming luminal images |
US9301687B2 (en) | 2013-03-13 | 2016-04-05 | Volcano Corporation | System and method for OCT depth calibration |
US9307926B2 (en) | 2012-10-05 | 2016-04-12 | Volcano Corporation | Automatic stent detection |
US9324141B2 (en) | 2012-10-05 | 2016-04-26 | Volcano Corporation | Removal of A-scan streaking artifact |
US9360630B2 (en) | 2011-08-31 | 2016-06-07 | Volcano Corporation | Optical-electrical rotary joint and methods of use |
US9367965B2 (en) | 2012-10-05 | 2016-06-14 | Volcano Corporation | Systems and methods for generating images of tissue |
US9383263B2 (en) | 2012-12-21 | 2016-07-05 | Volcano Corporation | Systems and methods for narrowing a wavelength emission of light |
US9478940B2 (en) | 2012-10-05 | 2016-10-25 | Volcano Corporation | Systems and methods for amplifying light |
US9486143B2 (en) | 2012-12-21 | 2016-11-08 | Volcano Corporation | Intravascular forward imaging device |
US9578224B2 (en) | 2012-09-10 | 2017-02-21 | Nvidia Corporation | System and method for enhanced monoimaging |
US9596993B2 (en) | 2007-07-12 | 2017-03-21 | Volcano Corporation | Automatic calibration systems and methods of use |
US9612105B2 (en) | 2012-12-21 | 2017-04-04 | Volcano Corporation | Polarization sensitive optical coherence tomography system |
US9622706B2 (en) | 2007-07-12 | 2017-04-18 | Volcano Corporation | Catheter for in vivo imaging |
CN106600521A (en) * | 2016-11-30 | 2017-04-26 | 宇龙计算机通信科技(深圳)有限公司 | Image processing method and terminal device |
US9709379B2 (en) | 2012-12-20 | 2017-07-18 | Volcano Corporation | Optical coherence tomography system that is reconfigurable between different imaging modes |
US9730613B2 (en) | 2012-12-20 | 2017-08-15 | Volcano Corporation | Locating intravascular images |
US9770172B2 (en) | 2013-03-07 | 2017-09-26 | Volcano Corporation | Multimodal segmentation in intravascular images |
US9829715B2 (en) | 2012-01-23 | 2017-11-28 | Nvidia Corporation | Eyewear device for transmitting signal and communication method thereof |
US9858668B2 (en) | 2012-10-05 | 2018-01-02 | Volcano Corporation | Guidewire artifact removal in images |
US9867530B2 (en) | 2006-08-14 | 2018-01-16 | Volcano Corporation | Telescopic side port catheter device with imaging system and method for accessing side branch occlusions |
US9906981B2 (en) | 2016-02-25 | 2018-02-27 | Nvidia Corporation | Method and system for dynamic regulation and control of Wi-Fi scans |
US10058284B2 (en) | 2012-12-21 | 2018-08-28 | Volcano Corporation | Simultaneous imaging, monitoring, and therapy |
US10070827B2 (en) | 2012-10-05 | 2018-09-11 | Volcano Corporation | Automatic image playback |
US10166003B2 (en) | 2012-12-21 | 2019-01-01 | Volcano Corporation | Ultrasound imaging with variable line density |
US10191220B2 (en) | 2012-12-21 | 2019-01-29 | Volcano Corporation | Power-efficient optical circuit |
US10219887B2 (en) | 2013-03-14 | 2019-03-05 | Volcano Corporation | Filters with echogenic characteristics |
US10219780B2 (en) | 2007-07-12 | 2019-03-05 | Volcano Corporation | OCT-IVUS catheter for concurrent luminal imaging |
US10226597B2 (en) | 2013-03-07 | 2019-03-12 | Volcano Corporation | Guidewire with centering mechanism |
US10238367B2 (en) | 2012-12-13 | 2019-03-26 | Volcano Corporation | Devices, systems, and methods for targeted cannulation |
US10292677B2 (en) | 2013-03-14 | 2019-05-21 | Volcano Corporation | Endoluminal filter having enhanced echogenic properties |
US10413317B2 (en) | 2012-12-21 | 2019-09-17 | Volcano Corporation | System and method for catheter steering and operation |
US10420530B2 (en) | 2012-12-21 | 2019-09-24 | Volcano Corporation | System and method for multipath processing of image signals |
US10426590B2 (en) | 2013-03-14 | 2019-10-01 | Volcano Corporation | Filters with echogenic characteristics |
US10536709B2 (en) | 2011-11-14 | 2020-01-14 | Nvidia Corporation | Prioritized compression for video |
US10568586B2 (en) | 2012-10-05 | 2020-02-25 | Volcano Corporation | Systems for indicating parameters in an imaging data set and methods of use |
US10595820B2 (en) | 2012-12-20 | 2020-03-24 | Philips Image Guided Therapy Corporation | Smooth transition catheters |
US10638939B2 (en) | 2013-03-12 | 2020-05-05 | Philips Image Guided Therapy Corporation | Systems and methods for diagnosing coronary microvascular disease |
US10724082B2 (en) | 2012-10-22 | 2020-07-28 | Bio-Rad Laboratories, Inc. | Methods for analyzing DNA |
US10758207B2 (en) | 2013-03-13 | 2020-09-01 | Philips Image Guided Therapy Corporation | Systems and methods for producing an image from a rotational intravascular ultrasound device |
US10935788B2 (en) | 2014-01-24 | 2021-03-02 | Nvidia Corporation | Hybrid virtual 3D rendering approach to stereovision |
US10942022B2 (en) | 2012-12-20 | 2021-03-09 | Philips Image Guided Therapy Corporation | Manual calibration of imaging system |
US10939826B2 (en) | 2012-12-20 | 2021-03-09 | Philips Image Guided Therapy Corporation | Aspirating and removing biological material |
US10993694B2 (en) | 2012-12-21 | 2021-05-04 | Philips Image Guided Therapy Corporation | Rotational ultrasound imaging catheter with extended catheter body telescope |
US11026591B2 (en) | 2013-03-13 | 2021-06-08 | Philips Image Guided Therapy Corporation | Intravascular pressure sensor calibration |
US11040140B2 (en) | 2010-12-31 | 2021-06-22 | Philips Image Guided Therapy Corporation | Deep vein thrombosis therapeutic methods |
US11141063B2 (en) | 2010-12-23 | 2021-10-12 | Philips Image Guided Therapy Corporation | Integrated system architectures and methods of use |
US11154313B2 (en) | 2013-03-12 | 2021-10-26 | The Volcano Corporation | Vibrating guidewire torquer and methods of use |
US11272845B2 (en) | 2012-10-05 | 2022-03-15 | Philips Image Guided Therapy Corporation | System and method for instant and automatic border detection |
US11406498B2 (en) | 2012-12-20 | 2022-08-09 | Philips Image Guided Therapy Corporation | Implant delivery system and implants |
Families Citing this family (111)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2003209194A1 (en) | 2002-01-08 | 2003-07-24 | Seven Networks, Inc. | Secure transport for mobile communication network |
US7853563B2 (en) | 2005-08-01 | 2010-12-14 | Seven Networks, Inc. | Universal data aggregation |
US8468126B2 (en) | 2005-08-01 | 2013-06-18 | Seven Networks, Inc. | Publishing data in an information community |
US7917468B2 (en) | 2005-08-01 | 2011-03-29 | Seven Networks, Inc. | Linking of personal information management data |
US7119810B2 (en) * | 2003-12-05 | 2006-10-10 | Siemens Medical Solutions Usa, Inc. | Graphics processing unit for simulation or medical diagnostic imaging |
US20050275760A1 (en) * | 2004-03-02 | 2005-12-15 | Nvidia Corporation | Modifying a rasterized surface, such as by trimming |
US7554538B2 (en) * | 2004-04-02 | 2009-06-30 | Nvidia Corporation | Video processing, such as for hidden surface reduction or removal |
US7460117B2 (en) * | 2004-05-25 | 2008-12-02 | Siemens Medical Solutions Usa, Inc. | Sliding texture volume rendering |
US7593021B1 (en) * | 2004-09-13 | 2009-09-22 | Nvidia Corp. | Optional color space conversion |
JP4450853B2 (en) * | 2004-09-16 | 2010-04-14 | エヌヴィディア コーポレイション | load distribution |
WO2006045102A2 (en) | 2004-10-20 | 2006-04-27 | Seven Networks, Inc. | Method and apparatus for intercepting events in a communication system |
US8010082B2 (en) | 2004-10-20 | 2011-08-30 | Seven Networks, Inc. | Flexible billing architecture |
US7744538B2 (en) * | 2004-11-01 | 2010-06-29 | Siemens Medical Solutions Usa, Inc. | Minimum arc velocity interpolation for three-dimensional ultrasound imaging |
US7706781B2 (en) | 2004-11-22 | 2010-04-27 | Seven Networks International Oy | Data security in a mobile e-mail service |
FI117152B (en) | 2004-12-03 | 2006-06-30 | Seven Networks Internat Oy | E-mail service provisioning method for mobile terminal, involves using domain part and further parameters to generate new parameter set in list of setting parameter sets, if provisioning of e-mail service is successful |
US7752633B1 (en) | 2005-03-14 | 2010-07-06 | Seven Networks, Inc. | Cross-platform event engine |
US7796742B1 (en) | 2005-04-21 | 2010-09-14 | Seven Networks, Inc. | Systems and methods for simplified provisioning |
US8438633B1 (en) | 2005-04-21 | 2013-05-07 | Seven Networks, Inc. | Flexible real-time inbox access |
US7852335B2 (en) * | 2005-05-09 | 2010-12-14 | Siemens Medical Solutions Usa, Inc. | Volume rendering processing distribution in a graphics processing unit |
US7764818B2 (en) * | 2005-06-20 | 2010-07-27 | Siemens Medical Solutions Usa, Inc. | Surface parameter adaptive ultrasound image processing |
WO2006136660A1 (en) | 2005-06-21 | 2006-12-28 | Seven Networks International Oy | Maintaining an ip connection in a mobile network |
US8069166B2 (en) | 2005-08-01 | 2011-11-29 | Seven Networks, Inc. | Managing user-to-user contact with inferred presence information |
US7769395B2 (en) | 2006-06-20 | 2010-08-03 | Seven Networks, Inc. | Location-based operations and messaging |
US8425418B2 (en) * | 2006-05-18 | 2013-04-23 | Eigen, Llc | Method of ultrasonic imaging and biopsy of the prostate |
US8648867B2 (en) | 2006-09-25 | 2014-02-11 | Neurala Llc | Graphic processor based accelerator system and method |
US9241683B2 (en) * | 2006-10-04 | 2016-01-26 | Ardent Sound Inc. | Ultrasound system and method for imaging and/or measuring displacement of moving tissue and fluid |
US8064664B2 (en) | 2006-10-18 | 2011-11-22 | Eigen, Inc. | Alignment method for registering medical images |
US7804989B2 (en) * | 2006-10-30 | 2010-09-28 | Eigen, Inc. | Object recognition system for medical imaging |
JP5022700B2 (en) * | 2006-12-27 | 2012-09-12 | 株式会社東芝 | Ultrasonic diagnostic equipment |
US20080161687A1 (en) * | 2006-12-29 | 2008-07-03 | Suri Jasjit S | Repeat biopsy system |
US8175350B2 (en) * | 2007-01-15 | 2012-05-08 | Eigen, Inc. | Method for tissue culture extraction |
US20080186378A1 (en) * | 2007-02-06 | 2008-08-07 | Feimo Shen | Method and apparatus for guiding towards targets during motion |
US7856130B2 (en) * | 2007-03-28 | 2010-12-21 | Eigen, Inc. | Object recognition system for medical imaging |
US9892659B2 (en) | 2007-05-21 | 2018-02-13 | Johnson County Community College Foundation, Inc. | Medical device and procedure simulation and training |
US9280916B2 (en) | 2007-05-21 | 2016-03-08 | Johnson County Community College Foundation, Inc. | Healthcare training system and method |
US9916773B2 (en) | 2007-05-21 | 2018-03-13 | Jc3 Innovations, Llc | Medical device and procedure simulation and training |
US8251703B2 (en) * | 2007-05-21 | 2012-08-28 | Johnson County Community College Foundation, Inc. | Healthcare training system and method |
US9886874B2 (en) | 2007-05-21 | 2018-02-06 | Johnson County Community College Foundation, Inc. | Medical device and procedure simulation and training |
US10186172B2 (en) | 2007-05-21 | 2019-01-22 | Jc3 Innovations, Llc | Blood glucose testing and monitoring system and method |
US9905135B2 (en) | 2007-05-21 | 2018-02-27 | Jc3 Innovations, Llc | Medical device and procedure simulation and training |
US8693494B2 (en) | 2007-06-01 | 2014-04-08 | Seven Networks, Inc. | Polling |
US8805425B2 (en) | 2007-06-01 | 2014-08-12 | Seven Networks, Inc. | Integrated messaging |
US20090048515A1 (en) * | 2007-08-14 | 2009-02-19 | Suri Jasjit S | Biopsy planning system |
US9347765B2 (en) * | 2007-10-05 | 2016-05-24 | Volcano Corporation | Real time SD-OCT with distributed acquisition and processing |
US8571277B2 (en) * | 2007-10-18 | 2013-10-29 | Eigen, Llc | Image interpolation for medical imaging |
US7942829B2 (en) * | 2007-11-06 | 2011-05-17 | Eigen, Inc. | Biopsy planning and display apparatus |
KR101132524B1 (en) * | 2007-11-09 | 2012-05-18 | 삼성메디슨 주식회사 | Ultrasound imaging system including graphic processing unit |
US8364181B2 (en) | 2007-12-10 | 2013-01-29 | Seven Networks, Inc. | Electronic-mail filtering for mobile devices |
US8793305B2 (en) | 2007-12-13 | 2014-07-29 | Seven Networks, Inc. | Content delivery to a mobile device from a content service |
US9002828B2 (en) | 2007-12-13 | 2015-04-07 | Seven Networks, Inc. | Predictive content delivery |
US8107921B2 (en) | 2008-01-11 | 2012-01-31 | Seven Networks, Inc. | Mobile virtual network operator |
US20090324041A1 (en) * | 2008-01-23 | 2009-12-31 | Eigen, Llc | Apparatus for real-time 3d biopsy |
US8862657B2 (en) | 2008-01-25 | 2014-10-14 | Seven Networks, Inc. | Policy based content service |
US20090193338A1 (en) | 2008-01-28 | 2009-07-30 | Trevor Fiatal | Reducing network and battery consumption during content delivery and playback |
US20100001996A1 (en) * | 2008-02-28 | 2010-01-07 | Eigen, Llc | Apparatus for guiding towards targets during motion using gpu processing |
US8787947B2 (en) | 2008-06-18 | 2014-07-22 | Seven Networks, Inc. | Application discovery on mobile devices |
US8078158B2 (en) | 2008-06-26 | 2011-12-13 | Seven Networks, Inc. | Provisioning applications for a mobile device |
EP2143384A1 (en) * | 2008-07-09 | 2010-01-13 | Medison Co., Ltd. | Enhanced ultrasound data processing in an ultrasound system |
US8909759B2 (en) | 2008-10-10 | 2014-12-09 | Seven Networks, Inc. | Bandwidth measurement |
US8780122B2 (en) * | 2009-09-16 | 2014-07-15 | Nvidia Corporation | Techniques for transferring graphics data from system memory to a discrete GPU |
WO2011126889A2 (en) | 2010-03-30 | 2011-10-13 | Seven Networks, Inc. | 3d mobile user interface with configurable workspace management |
US8838783B2 (en) | 2010-07-26 | 2014-09-16 | Seven Networks, Inc. | Distributed caching for resource and mobile network traffic management |
PL3407673T3 (en) | 2010-07-26 | 2020-05-18 | Seven Networks, Llc | Mobile network traffic coordination across multiple applications |
WO2012018556A2 (en) | 2010-07-26 | 2012-02-09 | Ari Backholm | Mobile application traffic optimization |
WO2012018477A2 (en) | 2010-07-26 | 2012-02-09 | Seven Networks, Inc. | Distributed implementation of dynamic wireless traffic policy |
US8326985B2 (en) | 2010-11-01 | 2012-12-04 | Seven Networks, Inc. | Distributed management of keep-alive message signaling for mobile network resource conservation and optimization |
EP2635973A4 (en) | 2010-11-01 | 2014-01-15 | Seven Networks Inc | Caching adapted for mobile application behavior and network conditions |
US9060032B2 (en) | 2010-11-01 | 2015-06-16 | Seven Networks, Inc. | Selective data compression by a distributed traffic management system to reduce mobile data traffic and signaling traffic |
US9330196B2 (en) | 2010-11-01 | 2016-05-03 | Seven Networks, Llc | Wireless traffic management system cache optimization using http headers |
US8190701B2 (en) | 2010-11-01 | 2012-05-29 | Seven Networks, Inc. | Cache defeat detection and caching of content addressed by identifiers intended to defeat cache |
US8484314B2 (en) | 2010-11-01 | 2013-07-09 | Seven Networks, Inc. | Distributed caching in a wireless network of content delivered for a mobile application over a long-held request |
WO2012060995A2 (en) | 2010-11-01 | 2012-05-10 | Michael Luna | Distributed caching in a wireless network of content delivered for a mobile application over a long-held request |
WO2012060997A2 (en) | 2010-11-01 | 2012-05-10 | Michael Luna | Application and network-based long poll request detection and cacheability assessment therefor |
US8843153B2 (en) | 2010-11-01 | 2014-09-23 | Seven Networks, Inc. | Mobile traffic categorization and policy for network use optimization while preserving user experience |
WO2012071384A2 (en) | 2010-11-22 | 2012-05-31 | Michael Luna | Optimization of resource polling intervals to satisfy mobile device requests |
EP2596658B1 (en) | 2010-11-22 | 2018-05-09 | Seven Networks, LLC | Aligning data transfer to optimize connections established for transmission over a wireless network |
GB2501416B (en) | 2011-01-07 | 2018-03-21 | Seven Networks Llc | System and method for reduction of mobile network traffic used for domain name system (DNS) queries |
GB2517815A (en) | 2011-04-19 | 2015-03-04 | Seven Networks Inc | Shared resource and virtual resource management in a networked environment |
GB2504037B (en) | 2011-04-27 | 2014-12-24 | Seven Networks Inc | Mobile device which offloads requests made by a mobile application to a remote entity for conservation of mobile device and network resources |
US8621075B2 (en) | 2011-04-27 | 2013-12-31 | Seven Networks, Inc. | Detecting and preserving state for satisfying application requests in a distributed proxy and cache system |
WO2013015995A1 (en) | 2011-07-27 | 2013-01-31 | Seven Networks, Inc. | Automatic generation and distribution of policy information regarding malicious mobile traffic in a wireless network |
US8934414B2 (en) | 2011-12-06 | 2015-01-13 | Seven Networks, Inc. | Cellular or WiFi mobile traffic optimization based on public or private network destination |
WO2013086225A1 (en) | 2011-12-06 | 2013-06-13 | Seven Networks, Inc. | A mobile device and method to utilize the failover mechanisms for fault tolerance provided for mobile traffic management and network/device resource conservation |
EP2788889A4 (en) | 2011-12-07 | 2015-08-12 | Seven Networks Inc | Flexible and dynamic integration schemas of a traffic management system with various network operators for network traffic alleviation |
WO2013086447A1 (en) | 2011-12-07 | 2013-06-13 | Seven Networks, Inc. | Radio-awareness of mobile device for sending server-side control signals using a wireless network optimized transport protocol |
US20130159511A1 (en) | 2011-12-14 | 2013-06-20 | Seven Networks, Inc. | System and method for generating a report to a network operator by distributing aggregation of data |
US8861354B2 (en) | 2011-12-14 | 2014-10-14 | Seven Networks, Inc. | Hierarchies and categories for management and deployment of policies for distributed wireless traffic optimization |
WO2013090834A1 (en) | 2011-12-14 | 2013-06-20 | Seven Networks, Inc. | Operation modes for mobile traffic optimization and concurrent management of optimized and non-optimized traffic |
WO2013103988A1 (en) | 2012-01-05 | 2013-07-11 | Seven Networks, Inc. | Detection and management of user interactions with foreground applications on a mobile device in distributed caching |
WO2013116856A1 (en) | 2012-02-02 | 2013-08-08 | Seven Networks, Inc. | Dynamic categorization of applications for network access in a mobile network |
WO2013116852A1 (en) | 2012-02-03 | 2013-08-08 | Seven Networks, Inc. | User as an end point for profiling and optimizing the delivery of content and data in a wireless network |
US8812695B2 (en) | 2012-04-09 | 2014-08-19 | Seven Networks, Inc. | Method and system for management of a virtual network connection without heartbeat messages |
US20130268656A1 (en) | 2012-04-10 | 2013-10-10 | Seven Networks, Inc. | Intelligent customer service/call center services enhanced using real-time and historical mobile application and traffic-related statistics collected by a distributed caching system in a mobile network |
CN102631222B (en) * | 2012-04-26 | 2014-01-01 | 珠海医凯电子科技有限公司 | CPU-based ultrasonic imaging scan conversion method |
US8775631B2 (en) | 2012-07-13 | 2014-07-08 | Seven Networks, Inc. | Dynamic bandwidth adjustment for browsing or streaming activity in a wireless network based on prediction of user behavior when interacting with mobile applications |
US9161258B2 (en) | 2012-10-24 | 2015-10-13 | Seven Networks, Llc | Optimized and selective management of policy deployment to mobile clients in a congested network to prevent further aggravation of network congestion |
US20140177497A1 (en) | 2012-12-20 | 2014-06-26 | Seven Networks, Inc. | Management of mobile device radio state promotion and demotion |
US9271238B2 (en) | 2013-01-23 | 2016-02-23 | Seven Networks, Llc | Application or context aware fast dormancy |
US8874761B2 (en) | 2013-01-25 | 2014-10-28 | Seven Networks, Inc. | Signaling optimization in a wireless network for traffic utilizing proprietary and non-proprietary protocols |
US8750123B1 (en) | 2013-03-11 | 2014-06-10 | Seven Networks, Inc. | Mobile device equipped with mobile network congestion recognition to make intelligent decisions regarding connecting to an operator network |
US9065765B2 (en) | 2013-07-22 | 2015-06-23 | Seven Networks, Inc. | Proxy server associated with a mobile carrier for enhancing mobile traffic management in a mobile network |
US10157481B2 (en) | 2014-09-23 | 2018-12-18 | Samsung Electronics Co., Ltd. | Apparatus for processing medical image and method of processing medical image thereof |
WO2016047989A1 (en) | 2014-09-23 | 2016-03-31 | Samsung Electronics Co., Ltd. | Apparatus for processing medical image and method of processing medical image thereof |
US9779466B2 (en) | 2015-05-07 | 2017-10-03 | Microsoft Technology Licensing, Llc | GPU operation |
CN106327419B (en) * | 2015-06-24 | 2019-12-13 | 龙芯中科技术有限公司 | Method and device for distributing display blocks in GPU display list |
US10716544B2 (en) | 2015-10-08 | 2020-07-21 | Zmk Medical Technologies Inc. | System for 3D multi-parametric ultrasound imaging |
EP3361949B1 (en) * | 2015-10-14 | 2020-06-17 | Curvebeam LLC | System for three dimensional measurement of foot alignment |
US20190228545A1 (en) * | 2016-09-28 | 2019-07-25 | Covidien Lp | System and method for parallelization of cpu and gpu processing for ultrasound imaging devices |
CN109308282A (en) * | 2017-07-28 | 2019-02-05 | 幻视互动(北京)科技有限公司 | Parallel architecture method and device for use in MR mixed reality equipment |
US10868950B2 (en) | 2018-12-12 | 2020-12-15 | Karl Storz Imaging, Inc. | Systems and methods for operating video medical scopes using a virtual camera control unit |
CN111464699B (en) * | 2020-04-02 | 2022-10-04 | 北京小米移动软件有限公司 | Call background display method, device and storage medium |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5682896A (en) * | 1996-03-28 | 1997-11-04 | Diasonics Ultrasound, Inc. | Method and apparatus for generating volume flow measurement |
US5709209A (en) * | 1996-03-29 | 1998-01-20 | Siemens Medical Systems, Inc. | Ultrasound signal processing system |
US5767863A (en) * | 1993-10-22 | 1998-06-16 | Auravision Corporation | Video processing technique using multi-buffer video memory |
US5787889A (en) * | 1996-12-18 | 1998-08-04 | University Of Washington | Ultrasound imaging with real time 3D image reconstruction and visualization |
US6413219B1 (en) * | 1999-03-31 | 2002-07-02 | General Electric Company | Three-dimensional ultrasound data display using multiple cut planes |
US6417857B2 (en) * | 1997-12-31 | 2002-07-09 | Acuson Corporation | System architecture and method for operating a medical diagnostic ultrasound system |
US6464641B1 (en) * | 1998-12-01 | 2002-10-15 | Ge Medical Systems Global Technology Company Llc | Method and apparatus for automatic vessel tracking in ultrasound imaging |
US6556199B1 (en) * | 1999-08-11 | 2003-04-29 | Advanced Research And Technology Institute | Method and apparatus for fast voxelization of volumetric models |
US6629926B1 (en) * | 1997-12-31 | 2003-10-07 | Acuson Corporation | Ultrasonic system and method for storing data |
US6685641B2 (en) * | 2002-02-01 | 2004-02-03 | Siemens Medical Solutions Usa, Inc. | Plane wave scanning reception and receiver |
US6784894B2 (en) * | 2000-08-24 | 2004-08-31 | Sun Microsystems, Inc. | Mapping time-sorted to direction-sorted triangle vertices |
US20040193042A1 (en) * | 2003-03-27 | 2004-09-30 | Steven Scampini | Guidance of invasive medical devices by high resolution three dimensional ultrasonic imaging |
US6852081B2 (en) * | 2003-03-13 | 2005-02-08 | Siemens Medical Solutions Usa, Inc. | Volume rendering in the acoustic grid methods and systems for ultrasound diagnostic imaging |
US20050043619A1 (en) * | 2003-08-20 | 2005-02-24 | Siemens Medical Solutions Usa, Inc. | Computing spatial derivatives for medical diagnostic imaging methods and systems |
US6896657B2 (en) * | 2003-05-23 | 2005-05-24 | Scimed Life Systems, Inc. | Method and system for registering ultrasound image in three-dimensional coordinate system |
US20050110793A1 (en) * | 2003-11-21 | 2005-05-26 | Steen Erik N. | Methods and systems for graphics processing in a medical imaging system |
US7119810B2 (en) * | 2003-12-05 | 2006-10-10 | Siemens Medical Solutions Usa, Inc. | Graphics processing unit for simulation or medical diagnostic imaging |
- 2003-12-05: US application US10/728,666, granted as US7119810B2 (status: Expired - Lifetime)
- 2005-02-16: US application US11/060,046, published as US20050140682A1 (status: Abandoned)
Cited By (87)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8018454B2 (en) | 2004-05-17 | 2011-09-13 | Siemens Medical Solutions Usa, Inc. | Volume rendering processing distribution in a graphics processing unit |
US20080018642A1 (en) * | 2004-05-17 | 2008-01-24 | Stefan Brabec | Volume rendering processing distribution in a graphics processing unit |
US7886094B1 (en) | 2005-06-15 | 2011-02-08 | Nvidia Corporation | Method and system for handshaking configuration between core logic components and graphics processors |
US7616202B1 (en) | 2005-08-12 | 2009-11-10 | Nvidia Corporation | Compaction of z-only samples |
US9092170B1 (en) * | 2005-10-18 | 2015-07-28 | Nvidia Corporation | Method and system for implementing fragment operation processing across a graphics bus interconnect |
US7500041B2 (en) * | 2006-06-15 | 2009-03-03 | Nvidia Corporation | Graphics processing unit for cost effective high performance graphics system with two or more graphics processing units |
US20080222340A1 (en) * | 2006-06-15 | 2008-09-11 | Nvidia Corporation | Bus Interface Controller For Cost-Effective HIgh Performance Graphics System With Two or More Graphics Processing Units |
US7562174B2 (en) | 2006-06-15 | 2009-07-14 | Nvidia Corporation | Motherboard having hard-wired private bus between graphics cards |
US7412554B2 (en) | 2006-06-15 | 2008-08-12 | Nvidia Corporation | Bus interface controller for cost-effective high performance graphics system with two or more graphics processing units |
US7617348B2 (en) | 2006-06-15 | 2009-11-10 | Nvidia Corporation | Bus interface controller for cost-effective high performance graphics system with two or more graphics processing units |
US20070294454A1 (en) * | 2006-06-15 | 2007-12-20 | Radoslav Danilak | Motherboard for cost-effective high performance graphics system with two or more graphics processing units |
US20070294458A1 (en) * | 2006-06-15 | 2007-12-20 | Radoslav Danilak | Bus interface controller for cost-effective high performance graphics system with two or more graphics processing units |
US20070291039A1 (en) * | 2006-06-15 | 2007-12-20 | Radoslav Danilak | Graphics processing unit for cost effective high performance graphics system with two or more graphics processing units |
US9867530B2 (en) | 2006-08-14 | 2018-01-16 | Volcano Corporation | Telescopic side port catheter device with imaging system and method for accessing side branch occlusions |
US10219780B2 (en) | 2007-07-12 | 2019-03-05 | Volcano Corporation | OCT-IVUS catheter for concurrent luminal imaging |
US11350906B2 (en) | 2007-07-12 | 2022-06-07 | Philips Image Guided Therapy Corporation | OCT-IVUS catheter for concurrent luminal imaging |
US9596993B2 (en) | 2007-07-12 | 2017-03-21 | Volcano Corporation | Automatic calibration systems and methods of use |
US9622706B2 (en) | 2007-07-12 | 2017-04-18 | Volcano Corporation | Catheter for in vivo imaging |
US20090248941A1 (en) * | 2008-03-31 | 2009-10-01 | Advanced Micro Devices, Inc. | Peer-To-Peer Special Purpose Processor Architecture and Method |
US8161209B2 (en) * | 2008-03-31 | 2012-04-17 | Advanced Micro Devices, Inc. | Peer-to-peer special purpose processor architecture and method |
US8892804B2 (en) | 2008-10-03 | 2014-11-18 | Advanced Micro Devices, Inc. | Internal BUS bridge architecture and method in multi-processor systems |
US20100088453A1 (en) * | 2008-10-03 | 2010-04-08 | Ati Technologies Ulc | Multi-Processor Architecture and Method |
US9977756B2 (en) | 2008-10-03 | 2018-05-22 | Advanced Micro Devices, Inc. | Internal bus architecture and method in multi-processor systems |
US20100088452A1 (en) * | 2008-10-03 | 2010-04-08 | Advanced Micro Devices, Inc. | Internal BUS Bridge Architecture and Method in Multi-Processor Systems |
US8373709B2 (en) * | 2008-10-03 | 2013-02-12 | Ati Technologies Ulc | Multi-processor architecture and method |
US20110074792A1 (en) * | 2009-09-30 | 2011-03-31 | Pai-Chi Li | Ultrasonic image processing system and ultrasonic image processing method thereof |
US11141063B2 (en) | 2010-12-23 | 2021-10-12 | Philips Image Guided Therapy Corporation | Integrated system architectures and methods of use |
US11040140B2 (en) | 2010-12-31 | 2021-06-22 | Philips Image Guided Therapy Corporation | Deep vein thrombosis therapeutic methods |
US9360630B2 (en) | 2011-08-31 | 2016-06-07 | Volcano Corporation | Optical-electrical rotary joint and methods of use |
US10536709B2 (en) | 2011-11-14 | 2020-01-14 | Nvidia Corporation | Prioritized compression for video |
US9829715B2 (en) | 2012-01-23 | 2017-11-28 | Nvidia Corporation | Eyewear device for transmitting signal and communication method thereof |
US9105250B2 (en) | 2012-08-03 | 2015-08-11 | Nvidia Corporation | Coverage compaction |
US9578224B2 (en) | 2012-09-10 | 2017-02-21 | Nvidia Corporation | System and method for enhanced monoimaging |
US11864870B2 (en) | 2012-10-05 | 2024-01-09 | Philips Image Guided Therapy Corporation | System and method for instant and automatic border detection |
US10568586B2 (en) | 2012-10-05 | 2020-02-25 | Volcano Corporation | Systems for indicating parameters in an imaging data set and methods of use |
US9478940B2 (en) | 2012-10-05 | 2016-10-25 | Volcano Corporation | Systems and methods for amplifying light |
US10070827B2 (en) | 2012-10-05 | 2018-09-11 | Volcano Corporation | Automatic image playback |
US9286673B2 (en) | 2012-10-05 | 2016-03-15 | Volcano Corporation | Systems for correcting distortions in a medical image and methods of use thereof |
US11272845B2 (en) | 2012-10-05 | 2022-03-15 | Philips Image Guided Therapy Corporation | System and method for instant and automatic border detection |
US9307926B2 (en) | 2012-10-05 | 2016-04-12 | Volcano Corporation | Automatic stent detection |
US9292918B2 (en) | 2012-10-05 | 2016-03-22 | Volcano Corporation | Methods and systems for transforming luminal images |
US9367965B2 (en) | 2012-10-05 | 2016-06-14 | Volcano Corporation | Systems and methods for generating images of tissue |
US11890117B2 (en) | 2012-10-05 | 2024-02-06 | Philips Image Guided Therapy Corporation | Systems for indicating parameters in an imaging data set and methods of use |
US11510632B2 (en) | 2012-10-05 | 2022-11-29 | Philips Image Guided Therapy Corporation | Systems for indicating parameters in an imaging data set and methods of use |
US9324141B2 (en) | 2012-10-05 | 2016-04-26 | Volcano Corporation | Removal of A-scan streaking artifact |
US9858668B2 (en) | 2012-10-05 | 2018-01-02 | Volcano Corporation | Guidewire artifact removal in images |
US9002125B2 (en) | 2012-10-15 | 2015-04-07 | Nvidia Corporation | Z-plane compression with z-plane predictors |
US10724082B2 (en) | 2012-10-22 | 2020-07-28 | Bio-Rad Laboratories, Inc. | Methods for analyzing DNA |
US10238367B2 (en) | 2012-12-13 | 2019-03-26 | Volcano Corporation | Devices, systems, and methods for targeted cannulation |
US9730613B2 (en) | 2012-12-20 | 2017-08-15 | Volcano Corporation | Locating intravascular images |
US9709379B2 (en) | 2012-12-20 | 2017-07-18 | Volcano Corporation | Optical coherence tomography system that is reconfigurable between different imaging modes |
US10939826B2 (en) | 2012-12-20 | 2021-03-09 | Philips Image Guided Therapy Corporation | Aspirating and removing biological material |
US10942022B2 (en) | 2012-12-20 | 2021-03-09 | Philips Image Guided Therapy Corporation | Manual calibration of imaging system |
US11141131B2 (en) | 2012-12-20 | 2021-10-12 | Philips Image Guided Therapy Corporation | Smooth transition catheters |
US11892289B2 (en) | 2012-12-20 | 2024-02-06 | Philips Image Guided Therapy Corporation | Manual calibration of imaging system |
US10595820B2 (en) | 2012-12-20 | 2020-03-24 | Philips Image Guided Therapy Corporation | Smooth transition catheters |
US11406498B2 (en) | 2012-12-20 | 2022-08-09 | Philips Image Guided Therapy Corporation | Implant delivery system and implants |
US10413317B2 (en) | 2012-12-21 | 2019-09-17 | Volcano Corporation | System and method for catheter steering and operation |
US9612105B2 (en) | 2012-12-21 | 2017-04-04 | Volcano Corporation | Polarization sensitive optical coherence tomography system |
US10332228B2 (en) | 2012-12-21 | 2019-06-25 | Volcano Corporation | System and method for graphical processing of medical data |
US9383263B2 (en) | 2012-12-21 | 2016-07-05 | Volcano Corporation | Systems and methods for narrowing a wavelength emission of light |
US10420530B2 (en) | 2012-12-21 | 2019-09-24 | Volcano Corporation | System and method for multipath processing of image signals |
US10993694B2 (en) | 2012-12-21 | 2021-05-04 | Philips Image Guided Therapy Corporation | Rotational ultrasound imaging catheter with extended catheter body telescope |
JP2016508757A (en) * | 2012-12-21 | 2016-03-24 | Jason Spencer | System and method for graphical processing of medical data |
US11786213B2 (en) | 2012-12-21 | 2023-10-17 | Philips Image Guided Therapy Corporation | System and method for multipath processing of image signals |
WO2014099763A1 (en) * | 2012-12-21 | 2014-06-26 | Jason Spencer | System and method for graphical processing of medical data |
US11253225B2 (en) | 2012-12-21 | 2022-02-22 | Philips Image Guided Therapy Corporation | System and method for multipath processing of image signals |
US9486143B2 (en) | 2012-12-21 | 2016-11-08 | Volcano Corporation | Intravascular forward imaging device |
US10058284B2 (en) | 2012-12-21 | 2018-08-28 | Volcano Corporation | Simultaneous imaging, monitoring, and therapy |
US10191220B2 (en) | 2012-12-21 | 2019-01-29 | Volcano Corporation | Power-efficient optical circuit |
US10166003B2 (en) | 2012-12-21 | 2019-01-01 | Volcano Corporation | Ultrasound imaging with variable line density |
US9770172B2 (en) | 2013-03-07 | 2017-09-26 | Volcano Corporation | Multimodal segmentation in intravascular images |
US10226597B2 (en) | 2013-03-07 | 2019-03-12 | Volcano Corporation | Guidewire with centering mechanism |
US11154313B2 (en) | 2013-03-12 | 2021-10-26 | The Volcano Corporation | Vibrating guidewire torquer and methods of use |
US10638939B2 (en) | 2013-03-12 | 2020-05-05 | Philips Image Guided Therapy Corporation | Systems and methods for diagnosing coronary microvascular disease |
US11026591B2 (en) | 2013-03-13 | 2021-06-08 | Philips Image Guided Therapy Corporation | Intravascular pressure sensor calibration |
US9301687B2 (en) | 2013-03-13 | 2016-04-05 | Volcano Corporation | System and method for OCT depth calibration |
US10758207B2 (en) | 2013-03-13 | 2020-09-01 | Philips Image Guided Therapy Corporation | Systems and methods for producing an image from a rotational intravascular ultrasound device |
US10292677B2 (en) | 2013-03-14 | 2019-05-21 | Volcano Corporation | Endoluminal filter having enhanced echogenic properties |
US10426590B2 (en) | 2013-03-14 | 2019-10-01 | Volcano Corporation | Filters with echogenic characteristics |
US10219887B2 (en) | 2013-03-14 | 2019-03-05 | Volcano Corporation | Filters with echogenic characteristics |
WO2015096649A1 (en) * | 2013-12-23 | 2015-07-02 | 华为技术有限公司 | Data processing method and related device |
CN104731569A (en) * | 2013-12-23 | 2015-06-24 | 华为技术有限公司 | Data processing method and relevant equipment |
US10935788B2 (en) | 2014-01-24 | 2021-03-02 | Nvidia Corporation | Hybrid virtual 3D rendering approach to stereovision |
CN105094981A (en) * | 2014-05-23 | 2015-11-25 | 华为技术有限公司 | Method and device for processing data |
US9906981B2 (en) | 2016-02-25 | 2018-02-27 | Nvidia Corporation | Method and system for dynamic regulation and control of Wi-Fi scans |
CN106600521A (en) * | 2016-11-30 | 2017-04-26 | 宇龙计算机通信科技(深圳)有限公司 | Image processing method and terminal device |
Also Published As
Publication number | Publication date |
---|---|
US20050122333A1 (en) | 2005-06-09 |
US7119810B2 (en) | 2006-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7119810B2 (en) | Graphics processing unit for simulation or medical diagnostic imaging | |
US7714855B2 (en) | Volume rendering processing distribution in a graphics processing unit | |
JP3860859B2 (en) | Computer graphics system with high performance primitive clipping preprocessing | |
US5821950A (en) | Computer graphics system utilizing parallel processing for enhanced performance | |
US5274760A (en) | Extendable multiple image-buffer for graphics systems | |
US6852081B2 (en) | Volume rendering in the acoustic grid methods and systems for ultrasound diagnostic imaging | |
US7037263B2 (en) | Computing spatial derivatives for medical diagnostic imaging methods and systems | |
US5649173A (en) | Hardware architecture for image generation and manipulation | |
US7764818B2 (en) | Surface parameter adaptive ultrasound image processing | |
JP2001034781A (en) | Volume graphics device | |
US20050237336A1 (en) | Method and system for multi-object volumetric data visualization | |
US20170329928A1 (en) | Method for high-speed parallel processing for ultrasonic signal by using smart device | |
US6341174B1 (en) | Selective rendering method and system for rapid 3 dimensional imaging | |
JPH03241480A (en) | Engine for drawing parallel polygon/picture element | |
US8243086B1 (en) | Variable length data compression using a geometry shading unit | |
US8254701B1 (en) | Data compression using a geometry shading unit | |
US7310103B2 (en) | Pipelined 2D viewport clip circuit | |
US20080012852A1 (en) | Volume rendering processing distribution in a graphics processing unit | |
US8295621B1 (en) | Data decompression using a geometry shading unit | |
CN117292039B (en) | Vertex coordinate generation method, vertex coordinate generation device, electronic equipment and computer storage medium | |
US7490208B1 (en) | Architecture for compact multi-ported register file | |
Kuo et al. | Interactive volume rendering of real-time three-dimensional ultrasound images | |
JP2005332195A (en) | Texture unit, image drawing apparatus, and texel transfer method | |
US7145570B2 (en) | Magnified texture-mapped pixel performance in a single-pixel pipeline | |
US6003098A (en) | Graphic accelerator architecture using two graphics processing units for processing aspects of pre-rasterized graphics primitives and a control circuitry for relaying pass-through information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |