US20120257807A1 - Method and System for Detection of Contrast Injection Fluoroscopic Image Sequences - Google Patents


Publication number
US20120257807A1
US20120257807A1
Authority
US
United States
Prior art keywords
contrast injection
volume
fluoroscopic image
image sequence
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/455,619
Inventor
Benjamin J. Sapp
Wei Zhang
Bogdan Georgescu
Simone Prummer
Dorin Comaniciu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Original Assignee
Siemens AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Priority to US13/455,619
Publication of US20120257807A1
Current legal status: Abandoned


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • a detector is trained based on the training volumes and the features extracted from the training volumes.
  • each of the training volumes is annotated with the location of a ground truth contrast injection point. These ground truth locations are used as positive training examples, and other locations in the volumes are used as negative training examples.
  • FIG. 6 illustrates exemplary positive training examples.
  • images 610, 620, 630, 640, 650, 660, 670, 680, and 690 are partial slices of training volumes and are annotated with ground truth contrast injection locations 612, 622, 632, 642, 652, 662, 672, 682, and 692, respectively.
  • FIG. 7 illustrates exemplary negative training examples. As illustrated in FIG. 7, images 702, 704, 706, 708, 710, and 712 are partial slices of training volumes with no contrast injection points.
  • the detector can be trained based on the positive and negative training examples using a probabilistic boosting tree (PBT) with the extracted features.
  • a PBT detector is trained by recursively constructing a tree, where each of the nodes represents a strong classifier. Once the strong classifier of each node is trained, the input training data for the node is classified into two sets (positives and negatives) using the learned strong classifier. The two new sets are fed to left and right child nodes respectively to train the left and right child nodes. In this way, the PBT classifier is constructed recursively.
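The recursive construction described above can be sketched as follows. This is a toy illustration in NumPy, not the patented implementation: a simple single-feature threshold classifier stands in for the boosted strong classifier trained at each node, and all names and data are hypothetical.

```python
import numpy as np

def train_stump(X, y):
    """Toy 'strong classifier': best single-feature mean threshold.
    A real PBT trains a boosted classifier (e.g., AdaBoost) per node."""
    best = None
    for j in range(X.shape[1]):
        thr = X[:, j].mean()
        for sign in (1, -1):
            acc = ((sign * (X[:, j] - thr) > 0) == y).mean()
            if best is None or acc > best[0]:
                best = (acc, j, thr, sign)
    return best[1:]

def apply_stump(clf, X):
    j, thr, sign = clf
    return sign * (X[:, j] - thr) > 0

def train_pbt(X, y, depth=3):
    """Recursive PBT-style construction: train a strong classifier at
    the node, feed its predicted positives and negatives to the left
    and right children, and recurse."""
    node = {"p": y.mean()}
    if depth > 1 and 0.0 < node["p"] < 1.0:
        node["clf"] = train_stump(X, y)
        pred = apply_stump(node["clf"], X)
        for key, mask in (("pos", pred), ("neg", ~pred)):
            if 0 < mask.sum() < len(y):
                node[key] = train_pbt(X[mask], y[mask], depth - 1)
    return node

def predict_pbt(node, x):
    if "clf" not in node:
        return node["p"]  # leaf: empirical posterior of its subset
    pred = apply_stump(node["clf"], x[None, :])[0]
    key = "pos" if pred else "neg"
    return predict_pbt(node[key], x) if key in node else float(pred)

# Two separable 2D clusters as stand-ins for negative/positive examples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([False] * 50 + [True] * 50)
tree = train_pbt(X, y)
acc = np.mean([(predict_pbt(tree, xi) > 0.5) == yi for xi, yi in zip(X, y)])
assert acc > 0.85
```

The key point mirrored from the text is the routing step: each node's learned classifier partitions its training data into predicted positives and negatives, which become the training sets of the two children.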
  • FIG. 8 illustrates a method for detecting a spatial and temporal location of contrast injection in a fluoroscopic image sequence using a trained detector according to an embodiment of the present invention.
  • a fluoroscopic image sequence is received.
  • the fluoroscopic image sequence can be received directly from an X-ray imaging device, or can be loaded, for example, from a memory or storage of a computer system or some other computer-readable medium.
  • a 3D volume is generated from the fluoroscopic image sequence by stacking the 2D fluoroscopic images in the sequence.
  • the fluoroscopic images are stacked in time order, and the discrete images are interpolated based on a sampling rate to generate a continuous 3D volume.
  • the trained detector is used to detect the spatial and temporal location of the contrast injection point in the fluoroscopic image sequence.
  • the detector is trained using the method of FIG. 3 .
  • the trained detector searches the volume to detect a contrast injection point (x,y,t) in the volume.
  • the detector determines probabilities for candidate points in the volume and selects the point with the highest probability of being a contrast injection point.
  • the contrast injection point (x,y,t) gives the spatial location (x,y) of the contrast injection and the temporal location (t) of the contrast injection.
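A brute-force version of this candidate search might look like the following sketch (NumPy; the `score` callable is a hypothetical stand-in for the trained detector's posterior, and the toy scoring rule here is illustrative only):

```python
import numpy as np

def detect_injection(volume, score):
    """Scan candidate (x, y, t) points in a stacked volume and return
    the one the detector scores highest, plus that score."""
    best, best_p = None, -np.inf
    X, Y, T = volume.shape
    for x in range(X):
        for y in range(Y):
            for t in range(T):
                p = score(volume, x, y, t)
                if p > best_p:
                    best, best_p = (x, y, t), p
    return best, best_p

# Toy score: brightness at the point (a real detector uses the trained PBT).
vol = np.zeros((4, 4, 6))
vol[2, 1, 3] = 1.0
loc, p = detect_injection(vol, lambda v, x, y, t: v[x, y, t])
assert loc == (2, 1, 3)  # gives spatial (x, y) and temporal (t) at once
```

In practice the search would be pruned or coarse-to-fine rather than exhaustive, but the output is the same: a single (x,y,t) that jointly encodes the spatial and temporal location.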
  • FIG. 9 illustrates exemplary contrast injection detection results.
  • the detection results of FIG. 9 were obtained using the method of FIG. 8, with a detector trained using the method of FIG. 3.
  • image 902 shows a detected contrast injection point 906 in the spatial domain
  • image 904 shows the detected contrast injection point 906 in the temporal domain.
  • the location of the contrast injection point 906 in image 902 is the spatial location of the contrast injection
  • the location of the contrast injection point 906 in image 904 is the temporal location of the contrast injection.
  • the spatial and temporal location of a contrast injection point can be used in automated image processing methods, such as vessel extraction or segmentation methods.
  • automated vessel segmentation methods, such as coronary digital subtraction angiography (DSA), can restrict segmentation to frames after the detected contrast injection point in a fluoroscopic image sequence.
  • the spatial and temporal location of the contrast injection point can also be used in coronary DSA to determine a pre-contrast image, as well as for virtual contrast to determine the model cycle.
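A minimal sketch of how a detected injection point might be consumed downstream (NumPy; the frame indices and the choice of the immediately preceding frame as the mask are illustrative assumptions, not specified by the patent):

```python
import numpy as np

# Hypothetical use of a detected injection point (x, y, t_inj): restrict
# vessel segmentation to frames at or after t_inj, and take an earlier
# frame as the DSA pre-contrast "mask image".
volume = np.random.default_rng(0).random((64, 64, 30))  # (x, y, t) stack
t_inj = 12
post_contrast = volume[:, :, t_inj:]   # frames eligible for segmentation
mask_frame = volume[:, :, t_inj - 1]   # pre-contrast mask candidate
assert post_contrast.shape == (64, 64, 18)
assert mask_frame.shape == (64, 64)
```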
  • Computer 1002 contains a processor 1004 which controls the overall operation of the computer 1002 by executing computer program instructions which define such operation.
  • the computer program instructions may be stored in a storage device 1012 , or other computer readable medium (e.g., magnetic disk, CD ROM, etc.), and loaded into memory 1010 when execution of the computer program instructions is desired.
  • FIGS. 3 and 8 can be defined by the computer program instructions stored in the memory 1010 and/or storage 1012 and controlled by the processor 1004 executing the computer program instructions.
  • An X-ray imaging device 1020 can be connected to the computer 1002 to input X-ray images to the computer 1002 . It is possible to implement the X-ray imaging device 1020 and the computer 1002 as one device. It is also possible that the X-ray imaging device 1020 and the computer 1002 communicate wirelessly through a network.
  • the computer 1002 also includes one or more network interfaces 1006 for communicating with other devices via a network.
  • the computer 1002 also includes input/output devices 1008 that enable user interaction with the computer 1002 (e.g., display, keyboard, mouse, speakers, buttons, etc.)
  • FIG. 10 is a high level representation of some of the components of such a computer for illustrative purposes.

Abstract

A method and system for detecting a spatial and temporal location of a contrast injection in a fluoroscopic image sequence is disclosed. Training volumes generated by stacking a sequence of 2D fluoroscopic images in time order are annotated with ground truth contrast injection points. A heart rate is globally estimated for each training volume, and local frequency and phase are estimated in a neighborhood of the ground truth contrast injection point for each training volume. Frequency and phase invariant features are extracted from each training volume based on the heart rate, local frequency, and phase, and a detector is trained based on the training volumes and the features extracted for each training volume. The detector can be used to detect the spatial and temporal location of a contrast injection in a fluoroscopic image sequence.

Description

  • This application is a divisional of U.S. application Ser. No. 12/231,770, filed Sep. 5, 2008, and claims the benefit of U.S. Provisional Application No. 60/974,100, filed Sep. 21, 2007, the disclosures of which are herein incorporated by reference.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to detection of contrast injection in fluoroscopic image sequences, and more particularly to detection of where and when a contrast agent is injected in a fluoroscopic image sequence.
  • Coronary angiography is a procedure that is recommended preoperatively for patients who are suffering from or at risk for coronary artery disease. Angiography is a medical imaging technique in which X-ray images are used to visualize internal blood filled structures, such as arteries, veins, and the heart chambers. Since blood has the same radiodensity as the surrounding tissues, these blood filled structures cannot be differentiated from the surrounding tissue using conventional radiology. In angiography, a catheter is inserted into a blood vessel, typically in the groin or the arm. The catheter is guided and positioned either in the heart or in arteries near the heart, and a contrast agent is added to the blood via the catheter to make the blood vessels in the heart visible via X-ray. As the contrast agent travels down the branches of the coronary artery, the vessel branches become visible in the X-ray (fluoroscopic) image. The X-ray images are taken over a period of time, which results in a sequence of fluoroscopic images.
  • The moment when the contrast is injected provides important temporal information for the automatic analysis of vessels. This temporal information can be used to trigger the starting of automatic vessel detection. For example, this temporal information can be used in implementing Digital Subtraction Angiography (DSA), which detects vessels by subtracting a pre-contrast image or “mask image” from later fluoroscopic images once the contrast agent has been introduced. Furthermore, the spatial location of the contrast injection point can be used as a starting point for vessel detection and tracking. Accordingly, it is desirable to detect the time and location of a contrast injection in a fluoroscopic image sequence.
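As a toy illustration of the DSA idea described above (NumPy, not part of the patent; the pixel values are made up), subtracting a pre-contrast mask frame from a later frame leaves only the contrast-filled vessel pixels:

```python
import numpy as np

mask = np.full((4, 4), 100.0)   # pre-contrast "mask image"
post = mask.copy()
post[1, 2] -= 30.0              # contrast agent attenuates one vessel pixel
dsa = post - mask               # digital subtraction
assert np.count_nonzero(dsa) == 1 and dsa[1, 2] == -30.0
```

This is why knowing the injection time matters: any frame used as the mask must come from before the contrast arrives.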
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention provides a method and system for detecting the temporal and spatial location of a contrast injection in a sequence of fluoroscopic images. Embodiments of the present invention detect the contrast injection in a fluoroscopic image sequence as a 3-dimensional detection problem, with two spatial dimensions and one time dimension.
  • In one embodiment of the present invention, training volumes are received. Each training volume is generated by stacking a sequence of 2D fluoroscopic images in time order and has two spatial dimensions and one temporal dimension. Each training volume is annotated with a ground truth contrast injection point. A heart rate is globally estimated for each training volume, and local frequency and phase are estimated in a neighborhood of the ground truth contrast injection point for each training volume. Frequency and phase invariant features are extracted from each training volume based on the heart rate, local frequency, and phase. A detector for detecting a spatial and temporal location of a contrast injection in a fluoroscopic image sequence is trained based on the training volumes and the features extracted for each training volume. The detector can be trained using a probabilistic boosting tree (PBT).
  • In another embodiment of the present invention, a fluoroscopic image sequence is received, and a 3D volume is generated by stacking the fluoroscopic image sequence. The 3D volume has two spatial dimensions and one temporal dimension, and can be generated by stacking the 2D fluoroscopic images in time order and interpolating the stacked 2D fluoroscopic images to generate a continuous 3D volume. The spatial and temporal location of the contrast injection in the fluoroscopic image sequence is then detected by processing the 3D volume using a trained contrast injection detector. The trained contrast injection detector can be trained using a PBT based on training examples and frequency and phase invariant features extracted from the training examples.
  • These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an exemplary fluoroscopic image showing the start of a contrast injection;
  • FIG. 2 illustrates an exemplary volume generated from a sequence of 2D fluoroscopic images;
  • FIG. 3 illustrates a method of training a detector for detecting a spatial and temporal location of a contrast injection in a fluoroscopic image sequence according to an embodiment of the present invention;
  • FIG. 4 illustrates an exemplary volume of stacked fluoroscopic images and slices of the volume;
  • FIG. 5 illustrates an exemplary intensity profile and power spectrum;
  • FIG. 6 illustrates exemplary positive training examples;
  • FIG. 7 illustrates exemplary negative training examples;
  • FIG. 8 illustrates a method for detecting a spatial and temporal location of contrast injection in a fluoroscopic image sequence using a trained detector according to an embodiment of the present invention;
  • FIG. 9 illustrates exemplary contrast injection detection results; and
  • FIG. 10 is a high level block diagram of a computer capable of implementing the present invention.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • The present invention relates to detection of a contrast injection time and location in a fluoroscopic image sequence. Embodiments of the present invention are described herein to give a visual understanding of the contrast injection detection method. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
  • A sequence of fluoroscopic images contains multiple 2D X-ray images obtained in real time. FIG. 1 illustrates an exemplary fluoroscopic image showing the start of a contrast injection. As illustrated in FIG. 1, image 100 is a fluoroscopic image with an annotated start location 102 of a contrast injection. A sequence of 2D fluoroscopic images can be stacked in order to generate a 3D volume with two spatial dimensions (x and y) and a time dimension (t). FIG. 2 illustrates an exemplary volume generated from a sequence of 2D fluoroscopic images. As illustrated in FIG. 2, image 202 is a 3D volume generated by stacking 2D fluoroscopic images in time order. Image 204 is a slice of volume 202 along a spatial-temporal plane, which shows the contrast in the blood vessels in the temporal domain. By stacking a sequence of fluoroscopic images to generate a 3D volume, 3D volume detection can be used to determine a location in the volume for the contrast injection. The location in the volume gives both the spatial location (x,y) and the temporal location (t) of the contrast injection.
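As a minimal sketch (NumPy, not part of the patent; frame count and size are hypothetical), stacking a sequence of 2D frames into an (x, y, t) volume and taking a spatial-temporal slice might look like:

```python
import numpy as np

# Hypothetical sequence: 8 frames of 64x64 fluoroscopic images.
frames = [np.random.default_rng(i).random((64, 64)) for i in range(8)]

# Stack frames along a new last axis so the volume is indexed as (x, y, t).
volume = np.stack(frames, axis=-1)
assert volume.shape == (64, 64, 8)

# A slice at fixed y is a spatial-temporal (x-t) plane, like image 204.
xt_slice = volume[:, 32, :]
assert xt_slice.shape == (64, 8)
```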
  • Embodiments of the present invention utilize the facts that vessel motion is mainly due to the beating of the heart and that the heartbeat is periodic. As shown in image 204 of FIG. 2, the contrast in the blood vessels in the temporal domain appears as a sinusoidal wave 206, whereas areas without contrast on spatial-temporal slices appear flat. Fourier transforms can be used to characterize the periodic motion that starts at the contrast injection point. Based on the periodic motion, the heart rate can be globally estimated from the time sequence angiogram, and the volume can be normalized with respect to the heart rate frequency to extract features that are invariant to patients' heart rates. Local phase and frequency estimation can also be performed to extract features invariant to local phase and frequency. The features can be used in training a detector to detect the location of the contrast injection in the volume.
  • FIG. 3 illustrates a method of training a detector for detecting a spatial and temporal location of a contrast injection in a fluoroscopic image sequence according to an embodiment of the present invention. At step 302, training volumes of stacked fluoroscopic image sequences are received. Each training volume is a series of 2D fluoroscopic images stacked in time order to form a 3D volume with two spatial dimensions (x,y) and a temporal dimension (t). The discrete frames of the fluoroscopic image sequence are stacked and interpolated based on the sampling rate of the sequence to generate a continuous 3D volume. Each training volume is annotated with the location (spatial and temporal) of a ground truth contrast injection point in the volume.
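The interpolation step can be sketched as follows (NumPy; the patent does not specify the interpolation scheme, so simple linear interpolation along t is an assumption, and the upsampling factor is illustrative):

```python
import numpy as np

def interpolate_time(volume, factor):
    """Linearly interpolate a stacked (x, y, t) volume along t, placing
    'factor' samples per original frame interval so the discrete stack
    approximates a continuous volume."""
    x, y, t = volume.shape
    new_t = np.linspace(0, t - 1, (t - 1) * factor + 1)
    lo = np.floor(new_t).astype(int)
    hi = np.minimum(lo + 1, t - 1)
    w = new_t - lo
    return volume[:, :, lo] * (1 - w) + volume[:, :, hi] * w

vol = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
dense = interpolate_time(vol, factor=4)
assert dense.shape == (2, 2, 9)
# Midpoints fall halfway between neighboring frames.
assert np.isclose(dense[0, 0, 2], (vol[0, 0, 0] + vol[0, 0, 1]) / 2)
```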
  • FIG. 4 illustrates an exemplary volume of stacked fluoroscopic images and slices of the volume. As illustrated in FIG. 4, image 402 is a 3D volume of a stacked fluoroscopic image sequence. Image 404 is a slice of volume 402 in the x-y plane; that is, image 404 is an original 2D fluoroscopic image of the sequence. Image 406 is a slice of volume 402 in the x-t plane, and image 408 is a slice of volume 402 in the y-t plane. The location 410 of the contrast injection is shown in images 404, 406, and 408. In image 404, the location 410 shows the spatial location, and in images 406 and 408 the location 410 shows the temporal location.
  • Returning to FIG. 3, at step 304, the heart rate is globally estimated for each sequence (training volume). The heart rate is estimated for a time sequence angiogram (fluoroscopic image sequence) via Fourier analysis. For each location (x,y) in the sequence, a 1D signal s_{x,y}[t] = I(x,y,t) is the intensity value of that location at time t. The power spectrum P_{x,y} of this signal is generated by taking the square of the magnitude of the Discrete Fourier Transform (DFT): P_{x,y} = S_n·S_n^*. Averaging the power spectrum over all locations gives a global average power spectrum, in which the main frequency components of the image sequence appear as peaks. The heart rate for the sequence can then be determined by limiting the range of frequencies to realistic heart rates to rule out other periodic signals (such as breathing). FIG. 5 illustrates an exemplary intensity profile and power spectrum. As illustrated in FIG. 5, graph 502 shows an intensity profile of a point with a fixed (x,y). The horizontal axis of graph 502 is time and the vertical axis of graph 502 is the intensity value with the mean subtracted. Graph 504 shows the power spectrum of the intensity profile of graph 502. The peak 506 in the power spectrum 504 corresponds to the frequency of the periodic motion in the volume. Since the periodic motion is due to the heart beating, the frequency of the periodic motion is the heart rate. The period estimated from the power spectrum 504 is 21.787 pixels, which corresponds to the pixel distance between peaks in the volume. The strong peak in the power spectrum lies at 47 on the horizontal axis, meaning the dominant frequency bin is 47.
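The global estimate described above can be sketched as follows (NumPy; the heart-rate band, frame rate, and synthetic test volume are assumptions for illustration, not values from the patent):

```python
import numpy as np

def estimate_global_frequency(volume, fps, hr_band=(0.7, 3.5)):
    """Estimate the dominant periodic frequency (Hz) of an (x, y, t)
    volume: average per-pixel power spectra, then take the peak inside
    a plausible heart-rate band to rule out e.g. breathing."""
    t = volume.shape[-1]
    signals = volume - volume.mean(axis=-1, keepdims=True)  # remove DC
    spectra = np.abs(np.fft.rfft(signals, axis=-1)) ** 2    # P = S * conj(S)
    avg = spectra.reshape(-1, spectra.shape[-1]).mean(axis=0)
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    return freqs[band][np.argmax(avg[band])]

# Synthetic volume "beating" at 1.5 Hz (90 bpm), sampled at 15 frames/s.
fps, t = 15, 150
time = np.arange(t) / fps
vol = np.ones((4, 4, t)) * np.sin(2 * np.pi * 1.5 * time)
assert np.isclose(estimate_global_frequency(vol, fps), 1.5, atol=0.1)
```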
  • Returning to FIG. 3, at step 306, the local phase and frequency are estimated in a neighborhood of the ground truth contrast injection point for each sequence (training volume). The local phase and frequency are also estimated using Fourier analysis, but the signal generation differs from that in the global frequency estimation. For a given ground truth contrast point location (x,y), the local phase is determined in a neighborhood from (x,y−w/2) to (x,y+w/2), where w is the window size. To generate a 1D time-varying signal, the maximum intensity over the window from (x,y−w/2) to (x,y+w/2) is stored for each value of t. This should correspond to the intensity at the center of the vessel in a single frame, which is assumed to be brighter than its neighbors because of the contrast. This 1D signal is expressed as sx,y,w[t]=max{I(x,j,t)|y−w/2<j<y+w/2}. Sn, the DFT of sx,y,w[t], is calculated, and the local frequency f is estimated as a local maximum in the power spectrum which lies in a reasonable range for heart rates. The phase is estimated as tan−1(imag(Sn(f))/real(Sn(f))).
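Step 306 can be sketched along the same lines. The window bounds, the (t, y, x) layout, and the DFT-bin band standing in for "a reasonable range for heart rates" are illustrative assumptions:

```python
import numpy as np

def estimate_local_phase_freq(volume, x, y, w, hr_bins=(2, 40)):
    """Estimate the local frequency and phase near a candidate point
    (x, y), as in step 306.  The 1D signal is the maximum intensity over
    the column window [y - w/2, y + w/2] at each frame t, which should
    track the bright, contrast-filled vessel center.  hr_bins restricts
    the DFT bins searched to a plausible heart-rate range."""
    lo, hi = max(0, y - w // 2), min(volume.shape[1], y + w // 2 + 1)
    # s[t] = max_j I(x, j, t) for j in the window around y
    s = volume[:, lo:hi, x].max(axis=1)
    S = np.fft.rfft(s - s.mean())
    power = np.abs(S) ** 2
    band = power[hr_bins[0]:hr_bins[1]]
    f = hr_bins[0] + int(np.argmax(band))        # local frequency (DFT bin)
    phase = np.arctan2(S[f].imag, S[f].real)     # tan^-1(imag/real) at f
    return f, phase
```

For a synthetic vessel row oscillating as cos(2π·f·t/T + φ), the routine recovers both the bin f and the phase φ.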
  • At step 308, frequency and phase invariant features are extracted from each sequence (training volume) using the estimated heart rate, local frequency, and phase. For classification of a candidate location (x,y,t) in a volume, a sub-window is aligned in time with the start of the estimated vessel period, such that the start of the sub-window is expressed as t_start=phase+floor((t−phase)/period)*period, where floor(x) is the largest integer that is not greater than x, and the period can be estimated as T=1024/f. The sub-window is extended in time for 2 periods to t_end=t_start+2*period. Thus, for a given amplitude a, the sub-window extends from (x,y−a,t_start) to (x,y+a,t_end). For example, a can be fixed at 20 pixels.
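The sub-window alignment above is a direct computation. A minimal sketch, assuming a signal length of 1024 (as in the text's T=1024/f) and the example amplitude of 20 pixels:

```python
import math

def aligned_subwindow(t, phase, f, signal_len=1024, amplitude=20, n_periods=2):
    """Compute the phase-aligned sub-window around candidate time t, as
    in step 308.  The period is estimated as signal_len / f; the window
    start snaps to the beginning of the vessel period containing t."""
    period = signal_len / f
    # t_start = phase + floor((t - phase) / period) * period
    t_start = phase + math.floor((t - phase) / period) * period
    # Extend in time for n_periods periods
    t_end = t_start + n_periods * period
    return t_start, t_end, amplitude
```

For instance, with f=64 (period 16) and phase 3, a candidate at t=40 yields a sub-window aligned to t_start=35 and extended to t_end=67.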
  • Features are then defined relative to the sub-window and parameterized by a height and a shift as fractions of the amplitude and period. This provides invariance to differing phases and local frequencies in different sub-windows of the same volume, as well as different phases and frequencies (global heart rates and local frequencies) in different sequences. The height and shift can be discretized into 10 and 20 values, respectively. At each (height, shift) pair, features are generated based on intensity, gradient, difference in intensity one period ahead, and difference in intensity half a period ahead with inverted amplitude. At each value of shift, mean intensity features are generated for all heights (i.e., the mean of the current column in the sub-window), and the difference in location of the maximum value between the previous and next shifts. Features are also generated that are global to the whole sub-window, based on differences in intensity values in frames before and after the candidate location, as well as a feature based on the correlation between pixels over two consecutive heart beat periods. This means that more features are generated around the candidate location.
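The fractional (height, shift) parameterization can be sketched as sampling a fixed grid of positions inside the sub-window, regardless of the sub-window's pixel dimensions; this is what makes the features invariant to differing local periods and amplitudes. The grid sizes and the plain intensity sampling below are illustrative simplifications of the full feature set:

```python
import numpy as np

def height_shift_features(subwin, n_heights=10, n_shifts=20):
    """Sample intensity features at fractional (height, shift) positions
    of a sub-window shaped (2a, 2*period), as in step 308.  Positions
    are fractions of amplitude and period, so the same feature indices
    refer to the same phase offsets in every sub-window."""
    H, W = subwin.shape
    feats = np.empty((n_heights, n_shifts))
    for i in range(n_heights):
        for j in range(n_shifts):
            r = int(i * (H - 1) / (n_heights - 1))   # height fraction -> row
            c = int(j * (W - 1) / (n_shifts - 1))    # shift fraction -> col
            feats[i, j] = subwin[r, c]
    return feats.ravel()
```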
  • At step 310, a detector is trained based on the training volumes and the features extracted from the training volumes. As described above, each of the training volumes is annotated with the location of a ground truth contrast injection point. These ground truth locations are used as positive training examples, and other locations in the volumes are used as negative training examples. FIG. 6 illustrates exemplary positive training examples. As illustrated in FIG. 6, images 610, 620, 630, 640, 650, 660, 670, 680, and 690 are partial slices of training volumes and are annotated with ground truth contrast injection locations 612, 622, 632, 642, 652, 662, 672, 682, and 692, respectively. FIG. 7 illustrates exemplary negative training examples. As illustrated in FIG. 7, images 702, 704, 706, 708, 710, and 712 are partial slices of training volumes with no contrast injection points. The detector can be trained based on the positive and negative training examples using a probabilistic boosting tree (PBT) with the extracted features. A PBT detector is trained by recursively constructing a tree, where each node represents a strong classifier. Once the strong classifier of a node is trained, the input training data for that node is classified into two sets (positives and negatives) using the learned strong classifier. The two new sets are then fed to the left and right child nodes, respectively, to train those child nodes. In this way, the PBT classifier is constructed recursively.
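The recursive construction above can be sketched with scikit-learn's AdaBoost as a stand-in strong classifier. This is a toy illustration of the PBT training scheme under assumed depth and weak-learner counts, not the disclosed implementation:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

class PBTNode:
    """Toy probabilistic boosting tree (PBT) node, as in step 310: train
    a strong (boosted) classifier, split the node's training data by its
    predictions, and recursively train left/right children on the two
    resulting sets."""

    def __init__(self, max_depth=2, n_weak=20):
        self.max_depth, self.n_weak = max_depth, n_weak
        self.clf = self.left = self.right = None

    def fit(self, X, y):
        self.clf = AdaBoostClassifier(n_estimators=self.n_weak).fit(X, y)
        if self.max_depth > 1:
            pred = self.clf.predict(X)
            for side, mask in (("left", pred == 0), ("right", pred == 1)):
                # Recurse only when the child would see both classes
                if mask.sum() > 1 and len(np.unique(y[mask])) == 2:
                    child = PBTNode(self.max_depth - 1, self.n_weak)
                    child.fit(X[mask], y[mask])
                    setattr(self, side, child)
        return self

    def predict_proba(self, X):
        # PBT posterior: p(y|x) = sum over children c of q(c|x) * p(y|c, x);
        # a missing child falls back to this node's own posterior.
        q = self.clf.predict_proba(X)  # column 1 = p(positive)
        p_l = self.left.predict_proba(X) if self.left else np.zeros(len(X))
        p_r = self.right.predict_proba(X) if self.right else np.ones(len(X))
        return q[:, 0] * p_l + q[:, 1] * p_r
```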
  • Once a detector is trained based on the training volumes and the extracted features, the detector can be used to detect the spatial and temporal location of a contrast injection in fluoroscopic image sequences. FIG. 8 illustrates a method for detecting a spatial and temporal location of contrast injection in a fluoroscopic image sequence using a trained detector according to an embodiment of the present invention. At step 802, a fluoroscopic image sequence is received. The fluoroscopic image sequence can be received directly from an X-ray imaging device or can be loaded, for example, from a memory or storage of a computer system or some other computer readable medium.
  • At step 804, a 3D volume is generated from the fluoroscopic image sequence by stacking the 2D fluoroscopic images in the sequence. The fluoroscopic images are stacked in time order, and the discrete images are interpolated based on a sampling rate to generate a continuous 3D volume.
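Step 804's stacking and interpolation can be sketched with NumPy; the output frame count, standing in for the text's sampling rate, is an assumed parameter:

```python
import numpy as np

def stack_to_volume(frames, out_frames=None):
    """Stack 2D fluoroscopic frames in time order into a 3D (t, y, x)
    volume, as in step 804, optionally resampling the time axis by
    linear interpolation to approximate a continuous volume."""
    volume = np.stack(frames, axis=0).astype(np.float64)
    T = volume.shape[0]
    if out_frames and out_frames != T:
        # Linear interpolation along the time axis only
        t_new = np.linspace(0.0, T - 1, out_frames)
        i0 = np.floor(t_new).astype(int)
        i1 = np.minimum(i0 + 1, T - 1)
        w = (t_new - i0)[:, None, None]
        volume = (1 - w) * volume[i0] + w * volume[i1]
    return volume
```

Interpolating two frames up to three time samples, for example, inserts a frame halfway between the originals.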
  • At step 806, the trained detector is used to detect the spatial and temporal location of the contrast injection point in the fluoroscopic image sequence. The detector is trained using the method of FIG. 3. The trained detector searches the volume to detect a contrast injection point (x,y,t) in the volume. The detector determines probabilities for candidate points in the volume to determine the point with the highest probability of being a contrast injection point. The contrast injection point (x,y,t) gives the spatial location (x,y) of the contrast injection and the temporal location (t) of the contrast injection.
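The search of step 806 amounts to an argmax of detector probability over candidate points. A minimal sketch with a pluggable probability function standing in for the trained detector (the stride and the (t, y, x) layout are illustrative assumptions):

```python
import numpy as np

def detect_injection_point(volume, prob_fn, stride=4):
    """Scan candidate points of a (T, H, W) volume and return the
    (x, y, t) point with the highest detector probability, as in step
    806.  prob_fn is any callable mapping (volume, x, y, t) to the
    probability of a contrast injection at that point."""
    best, best_p = None, -1.0
    T, H, W = volume.shape
    for t in range(0, T, stride):
        for y in range(0, H, stride):
            for x in range(0, W, stride):
                p = prob_fn(volume, x, y, t)
                if p > best_p:
                    best, best_p = (x, y, t), p
    return best, best_p
```

With the raw intensity used as a dummy probability, the scan returns the brightest candidate point.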
  • FIG. 9 illustrates exemplary contrast injection detection results. The detection results of FIG. 9 were detected, as described in the method of FIG. 8, with a detector trained using the method of FIG. 3. As illustrated in FIG. 9, image 902 shows a detected contrast injection point 906 in the spatial domain and image 904 shows the detected contrast injection point 906 in the temporal domain. Accordingly, the location of the contrast injection point 906 in image 902 is the spatial location of the contrast injection, and the location of the contrast injection point 906 in image 904 is the temporal location of the contrast injection.
  • The spatial and temporal location of a contrast injection point can be used in automated image processing methods, such as vessel extraction or segmentation methods. For example, automated vessel segmentation methods, such as coronary digital subtraction angiography (DSA), may return erroneous results when trying to segment images in a fluoroscopic image sequence prior to the contrast injection. Such automated vessel segmentation methods can restrict segmentation to after the detected contrast injection point in a fluoroscopic image sequence. The spatial and temporal location of the contrast injection point can also be used in coronary DSA to determine a pre-contrast image, as well as for virtual contrast to determine the model cycle.
  • The above-described methods for contrast injection detection can be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high-level block diagram of such a computer is illustrated in FIG. 10. Computer 1002 contains a processor 1004 which controls the overall operation of the computer 1002 by executing computer program instructions which define such operation. The computer program instructions may be stored in a storage device 1012, or other computer readable medium (e.g., magnetic disk, CD ROM, etc.), and loaded into memory 1010 when execution of the computer program instructions is desired. Thus, the method steps of FIGS. 3 and 8 can be defined by the computer program instructions stored in the memory 1010 and/or storage 1012 and controlled by the processor 1004 executing the computer program instructions. An X-ray imaging device 1020 can be connected to the computer 1002 to input X-ray images to the computer 1002. It is possible to implement the X-ray imaging device 1020 and the computer 1002 as one device. It is also possible that the X-ray imaging device 1020 and the computer 1002 communicate wirelessly through a network. The computer 1002 also includes one or more network interfaces 1006 for communicating with other devices via a network. The computer 1002 also includes input/output devices 1008 that enable user interaction with the computer 1002 (e.g., display, keyboard, mouse, speakers, buttons, etc.). One skilled in the art will recognize that an implementation of an actual computer could contain other components as well, and that FIG. 10 is a high level representation of some of the components of such a computer for illustrative purposes.
  • The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.

Claims (11)

1. A method for detecting a spatial and temporal location of a contrast injection in a fluoroscopic image sequence, comprising:
receiving a fluoroscopic image sequence;
generating a 3D volume by stacking the fluoroscopic image sequence; and
detecting a spatial and temporal location of the contrast injection in the fluoroscopic image sequence by processing the 3D volume using a trained contrast injection detector.
2. The method of claim 1, wherein said step of generating a 3D volume comprises:
stacking a plurality of 2D fluoroscopic images in the fluoroscopic image sequence in time order; and
interpolating the stacked 2D fluoroscopic images to generate a continuous 3D volume having two spatial dimensions and one time dimension.
3. The method of claim 1, wherein the trained contrast injection detector is trained using a probabilistic boosting tree (PBT) based on training examples and frequency and phase invariant features extracted from the training examples.
4. The method of claim 1, wherein the 3D volume has two spatial dimensions and one temporal dimension and said step of detecting a spatial and temporal location of the contrast injection in the fluoroscopic image sequence comprises:
detecting a contrast injection point in the 3D volume using the contrast injection detector, wherein coordinates of the detected contrast injection point give the spatial and temporal location of the contrast injection.
5. An apparatus for detecting a spatial and temporal location of a contrast injection in a fluoroscopic image sequence, comprising:
means for receiving a fluoroscopic image sequence;
means for generating a 3D volume by stacking the fluoroscopic image sequence; and
means for detecting a spatial and temporal location of the contrast injection in the fluoroscopic image sequence by processing the 3D volume using a trained contrast injection detector.
6. The apparatus of claim 5, wherein said means for generating a 3D volume comprises:
means for stacking a plurality of 2D fluoroscopic images in the fluoroscopic image sequence in time order; and
means for interpolating the stacked 2D fluoroscopic images to generate a continuous 3D volume having two spatial dimensions and one time dimension.
7. The apparatus of claim 5, wherein the trained contrast injection detector is trained using a probabilistic boosting tree (PBT) based on training examples and frequency and phase invariant features extracted from the training examples.
8. A computer readable medium encoded with computer executable instructions for detecting a spatial and temporal location of a contrast injection in a fluoroscopic image sequence, the computer executable instructions defining steps comprising:
receiving a fluoroscopic image sequence;
generating a 3D volume by stacking the fluoroscopic image sequence; and
detecting a spatial and temporal location of the contrast injection in the fluoroscopic image sequence by processing the 3D volume using a trained contrast injection detector.
9. The computer readable medium of claim 8, wherein the computer executable instructions defining the step of generating a 3D volume comprise computer executable instructions defining the steps of:
stacking a plurality of 2D fluoroscopic images in the fluoroscopic image sequence in time order; and
interpolating the stacked 2D fluoroscopic images to generate a continuous 3D volume having two spatial dimensions and one time dimension.
10. The computer readable medium of claim 8, wherein the trained contrast injection detector is trained using a probabilistic boosting tree (PBT) based on training examples and frequency and phase invariant features extracted from the training examples.
11. The computer readable medium of claim 8, wherein the 3D volume has two spatial dimensions and one temporal dimension and the computer executable instructions defining the step of detecting a spatial and temporal location of the contrast injection in the fluoroscopic image sequence comprise computer executable instructions defining the step of:
detecting a contrast injection point in the 3D volume using the contrast injection detector, wherein coordinates of the detected contrast injection point give the spatial and temporal location of the contrast injection.
US13/455,619 2007-09-21 2012-04-25 Method and System for Detection of Contrast Injection Fluoroscopic Image Sequences Abandoned US20120257807A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/455,619 US20120257807A1 (en) 2007-09-21 2012-04-25 Method and System for Detection of Contrast Injection Fluoroscopic Image Sequences

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US97410007P 2007-09-21 2007-09-21
US12/231,770 US8194955B2 (en) 2007-09-21 2008-09-05 Method and system for detection of contrast injection in fluoroscopic image sequences
US13/455,619 US20120257807A1 (en) 2007-09-21 2012-04-25 Method and System for Detection of Contrast Injection Fluoroscopic Image Sequences

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/231,770 Division US8194955B2 (en) 2007-09-21 2008-09-05 Method and system for detection of contrast injection in fluoroscopic image sequences

Publications (1)

Publication Number Publication Date
US20120257807A1 true US20120257807A1 (en) 2012-10-11

Family

ID=40522468

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/231,770 Expired - Fee Related US8194955B2 (en) 2007-09-21 2008-09-05 Method and system for detection of contrast injection in fluoroscopic image sequences
US13/455,619 Abandoned US20120257807A1 (en) 2007-09-21 2012-04-25 Method and System for Detection of Contrast Injection Fluoroscopic Image Sequences

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/231,770 Expired - Fee Related US8194955B2 (en) 2007-09-21 2008-09-05 Method and system for detection of contrast injection in fluoroscopic image sequences

Country Status (1)

Country Link
US (2) US8194955B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11918423B2 (en) 2018-10-30 2024-03-05 Corindus, Inc. System and method for navigating a device through a path to a target location

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8194955B2 (en) * 2007-09-21 2012-06-05 Siemens Aktiengesellschaft Method and system for detection of contrast injection in fluoroscopic image sequences
DE102008005923B4 (en) * 2008-01-24 2022-07-07 Siemens Healthcare Gmbh Method and device for automatic contrast agent phase classification of image data
DE102008048045A1 (en) * 2008-09-19 2010-04-01 Siemens Aktiengesellschaft A method for generating computer tomographic image data sets of a patient in cardiac CT in a perfusion control under contrast medium application
US20110002520A1 (en) * 2009-07-01 2011-01-06 Siemens Corporation Method and System for Automatic Contrast Phase Classification
US8731262B2 (en) 2010-06-03 2014-05-20 Siemens Medical Solutions Usa, Inc. Medical image and vessel characteristic data processing system
US8553963B2 (en) 2011-02-09 2013-10-08 Siemens Medical Solutions Usa, Inc. Digital subtraction angiography (DSA) motion compensated imaging system
US9292921B2 (en) 2011-03-07 2016-03-22 Siemens Aktiengesellschaft Method and system for contrast inflow detection in 2D fluoroscopic images
US8463012B2 (en) 2011-10-14 2013-06-11 Siemens Medical Solutions Usa, Inc. System for comparison of medical images
US10810455B2 (en) 2018-03-05 2020-10-20 Nvidia Corp. Spatio-temporal image metric for rendered animations
US11263481B1 (en) 2021-01-28 2022-03-01 International Business Machines Corporation Automated contrast phase based medical image selection/exclusion

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080095421A1 (en) * 2006-10-20 2008-04-24 Siemens Corporation Research, Inc. Registering 2d and 3d data using 3d ultrasound data
US20090010512A1 (en) * 2007-06-28 2009-01-08 Ying Zhu System and method for coronary digital subtraction angiography
US8194955B2 (en) * 2007-09-21 2012-06-05 Siemens Aktiengesellschaft Method and system for detection of contrast injection in fluoroscopic image sequences
US20120230558A1 (en) * 2011-03-07 2012-09-13 Siemens Corporation Method and System for Contrast Inflow Detection in 2D Fluoroscopic Images
US20130011030A1 (en) * 2011-07-07 2013-01-10 Siemens Aktiengesellschaft Method and System for Device Detection in 2D Medical Images
US20130129170A1 (en) * 2011-11-09 2013-05-23 Siemens Aktiengesellschaft Method and System for Precise Segmentation of the Left Atrium in C-Arm Computed Tomography Volumes
US8675914B2 (en) * 2009-09-14 2014-03-18 Siemens Aktiengesellschaft Method and system for needle tracking in fluoroscopic image sequences

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6458243A (en) * 1987-08-28 1989-03-06 Toshiba Corp X-ray image processing apparatus
DE19811349C1 (en) * 1998-03-16 1999-10-07 Siemens Ag Process for contrast substance tracing using medical imaging appts.
US20040215081A1 (en) * 2003-04-23 2004-10-28 Crane Robert L. Method for detection and display of extravasation and infiltration of fluids and substances in subdermal or intradermal tissue
CN1933774A (en) * 2004-04-07 2007-03-21 柯尼卡美能达医疗印刷器材株式会社 Radiation image photography system, radiation image photography program, and information storage medium
DE102004055461A1 (en) * 2004-11-17 2006-05-04 Siemens Ag Method for creation of image of coronary heart disease involves using images of computer tomograph taken at various stages
US20060173360A1 (en) * 2005-01-07 2006-08-03 Kalafut John F Method for detection and display of extravasation and infiltration of fluids and substances in subdermal or intradermal tissue
FR2884948B1 (en) * 2005-04-26 2009-01-23 Gen Electric METHOD AND DEVICE FOR REDUCING NOISE IN A SEQUENCE OF FLUOROSCOPIC IMAGES
US8396533B2 (en) * 2007-08-21 2013-03-12 Siemens Aktiengesellschaft Method and system for catheter detection and tracking in a fluoroscopic image sequence
US9195905B2 (en) * 2010-03-10 2015-11-24 Siemens Aktiengesellschaft Method and system for graph based interactive detection of curve structures in 2D fluoroscopy
US8548213B2 (en) * 2010-03-16 2013-10-01 Siemens Corporation Method and system for guiding catheter detection in fluoroscopic images
US8892186B2 (en) * 2010-09-20 2014-11-18 Siemens Aktiengesellschaft Method and system for detection and tracking of coronary sinus catheter electrodes in fluoroscopic images

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080095421A1 (en) * 2006-10-20 2008-04-24 Siemens Corporation Research, Inc. Registering 2d and 3d data using 3d ultrasound data
US8126239B2 (en) * 2006-10-20 2012-02-28 Siemens Aktiengesellschaft Registering 2D and 3D data using 3D ultrasound data
US20090010512A1 (en) * 2007-06-28 2009-01-08 Ying Zhu System and method for coronary digital subtraction angiography
US8194955B2 (en) * 2007-09-21 2012-06-05 Siemens Aktiengesellschaft Method and system for detection of contrast injection in fluoroscopic image sequences
US8675914B2 (en) * 2009-09-14 2014-03-18 Siemens Aktiengesellschaft Method and system for needle tracking in fluoroscopic image sequences
US20120230558A1 (en) * 2011-03-07 2012-09-13 Siemens Corporation Method and System for Contrast Inflow Detection in 2D Fluoroscopic Images
US20130011030A1 (en) * 2011-07-07 2013-01-10 Siemens Aktiengesellschaft Method and System for Device Detection in 2D Medical Images
US20130129170A1 (en) * 2011-11-09 2013-05-23 Siemens Aktiengesellschaft Method and System for Precise Segmentation of the Left Atrium in C-Arm Computed Tomography Volumes

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11918423B2 (en) 2018-10-30 2024-03-05 Corindus, Inc. System and method for navigating a device through a path to a target location

Also Published As

Publication number Publication date
US20090090873A1 (en) 2009-04-09
US8194955B2 (en) 2012-06-05

Similar Documents

Publication Publication Date Title
US8194955B2 (en) Method and system for detection of contrast injection in fluoroscopic image sequences
US8396533B2 (en) Method and system for catheter detection and tracking in a fluoroscopic image sequence
US8094903B2 (en) System and method for coronary digital subtraction angiography
US9761004B2 (en) Method and system for automatic detection of coronary stenosis in cardiac computed tomography data
US7864997B2 (en) Method, apparatus and computer program product for automatic segmenting of cardiac chambers
Saha et al. Topomorphologic separation of fused isointensity objects via multiscale opening: Separating arteries and veins in 3-D pulmonary CT
US8548213B2 (en) Method and system for guiding catheter detection in fluoroscopic images
US8582854B2 (en) Method and system for automatic coronary artery detection
US8121367B2 (en) Method and system for vessel segmentation in fluoroscopic images
US9792703B2 (en) Generating a synthetic two-dimensional mammogram
US9014423B2 (en) Method and system for catheter tracking in fluoroscopic images using adaptive discriminant learning and measurement fusion
US9999399B2 (en) Method and system for pigtail catheter motion prediction
JP2009504297A (en) Method and apparatus for automatic 4D coronary modeling and motion vector field estimation
US8948484B2 (en) Method and system for automatic view planning for cardiac magnetic resonance imaging acquisition
US9367924B2 (en) Method and system for segmentation of the liver in magnetic resonance images using multi-channel features
US9082158B2 (en) Method and system for real time stent enhancement on live 2D fluoroscopic scene
US20110002520A1 (en) Method and System for Automatic Contrast Phase Classification
JP2000048185A (en) Method for obtaining high quality image of desired structure, and system displaying the image
JPH05285125A (en) Method and device for extracting contour in studying multi-sliced and multi-phased heart mri by transmitting seed contour between images
US20120071755A1 (en) Method and System for Automatic Native and Bypass Coronary Ostia Detection in Cardiac Computed Tomography Volumes
US20110033102A1 (en) System and Method for Coronary Digital Subtraction Angiography
US9292921B2 (en) Method and system for contrast inflow detection in 2D fluoroscopic images
US10390799B2 (en) Apparatus and method for interpolating lesion detection
US10453184B2 (en) Image processing apparatus and X-ray diagnosis apparatus
US8675914B2 (en) Method and system for needle tracking in fluoroscopic image sequences

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION